00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 602 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3264 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.105 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.105 The recommended git tool is: git 00:00:00.106 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.147 Fetching changes from the remote Git repository 00:00:00.148 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.177 Using shallow fetch with depth 1 00:00:00.177 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.177 > git --version # timeout=10 00:00:00.201 > git --version # 'git version 2.39.2' 00:00:00.201 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.216 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.216 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.180 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.190 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.201 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:05.201 > git config core.sparsecheckout # timeout=10 00:00:05.212 > git read-tree -mu HEAD # timeout=10 00:00:05.227 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:05.245 Commit message: "inventory: add WCP3 to free inventory" 00:00:05.245 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:05.347 [Pipeline] Start of Pipeline 00:00:05.361 [Pipeline] library 00:00:05.362 Loading library shm_lib@master 00:00:05.363 Library shm_lib@master is cached. Copying from home. 00:00:05.381 [Pipeline] node 00:00:05.390 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:05.392 [Pipeline] { 00:00:05.400 [Pipeline] catchError 00:00:05.401 [Pipeline] { 00:00:05.415 [Pipeline] wrap 00:00:05.425 [Pipeline] { 00:00:05.430 [Pipeline] stage 00:00:05.432 [Pipeline] { (Prologue) 00:00:05.453 [Pipeline] echo 00:00:05.455 Node: VM-host-SM0 00:00:05.462 [Pipeline] cleanWs 00:00:05.471 [WS-CLEANUP] Deleting project workspace... 00:00:05.471 [WS-CLEANUP] Deferred wipeout is used... 
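The checkout traced at the top of this log deliberately avoids a full clone: it performs a depth-1 fetch of the build-pool repository and then detaches onto the exact revision the trigger resolved, so the workspace carries no history. A minimal standalone sketch of that sequence, reusing the URL, fetch options, and SHA from the trace above (the local directory name "jbp" is only an illustrative assumption):

  # Shallow, pinned checkout, mirroring the git invocations traced above.
  git init jbp && cd jbp
  # Depth-1 transfer: only the tip of master (plus tags) is downloaded.
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # Detach onto the revision the job reported ("inventory: add WCP3 to free inventory").
  git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d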
00:00:05.476 [WS-CLEANUP] done 00:00:05.638 [Pipeline] setCustomBuildProperty 00:00:05.706 [Pipeline] httpRequest 00:00:05.725 [Pipeline] echo 00:00:05.727 Sorcerer 10.211.164.101 is alive 00:00:05.734 [Pipeline] httpRequest 00:00:05.738 HttpMethod: GET 00:00:05.738 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.739 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.752 Response Code: HTTP/1.1 200 OK 00:00:05.753 Success: Status code 200 is in the accepted range: 200,404 00:00:05.753 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.709 [Pipeline] sh 00:00:06.987 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.003 [Pipeline] httpRequest 00:00:07.029 [Pipeline] echo 00:00:07.030 Sorcerer 10.211.164.101 is alive 00:00:07.039 [Pipeline] httpRequest 00:00:07.043 HttpMethod: GET 00:00:07.044 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:07.044 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:07.045 Response Code: HTTP/1.1 200 OK 00:00:07.045 Success: Status code 200 is in the accepted range: 200,404 00:00:07.046 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:26.525 [Pipeline] sh 00:00:26.805 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:29.349 [Pipeline] sh 00:00:29.626 + git -C spdk log --oneline -n5 00:00:29.626 719d03c6a sock/uring: only register net impl if supported 00:00:29.626 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:29.626 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:29.626 6c7c1f57e accel: add sequence outstanding stat 00:00:29.626 3bc8e6a26 accel: add utility to put task 00:00:29.645 [Pipeline] withCredentials 00:00:29.653 > git --version # timeout=10 00:00:29.664 > git --version # 'git version 2.39.2' 00:00:29.678 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:29.680 [Pipeline] { 00:00:29.688 [Pipeline] retry 00:00:29.690 [Pipeline] { 00:00:29.707 [Pipeline] sh 00:00:29.983 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:32.546 [Pipeline] } 00:00:32.567 [Pipeline] // retry 00:00:32.572 [Pipeline] } 00:00:32.590 [Pipeline] // withCredentials 00:00:32.599 [Pipeline] httpRequest 00:00:32.629 [Pipeline] echo 00:00:32.630 Sorcerer 10.211.164.101 is alive 00:00:32.636 [Pipeline] httpRequest 00:00:32.640 HttpMethod: GET 00:00:32.640 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:32.640 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:32.642 Response Code: HTTP/1.1 200 OK 00:00:32.642 Success: Status code 200 is in the accepted range: 200,404 00:00:32.642 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:41.930 [Pipeline] sh 00:00:42.210 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:43.597 [Pipeline] sh 00:00:43.877 + git -C dpdk log --oneline -n5 00:00:43.877 eeb0605f11 version: 23.11.0 00:00:43.877 238778122a doc: update release notes for 23.11 00:00:43.877 46aa6b3cfc doc: fix description of RSS features 
00:00:43.877 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:43.877 7e421ae345 devtools: support skipping forbid rule check 00:00:43.898 [Pipeline] writeFile 00:00:43.915 [Pipeline] sh 00:00:44.195 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:44.207 [Pipeline] sh 00:00:44.487 + cat autorun-spdk.conf 00:00:44.487 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.487 SPDK_TEST_NVMF=1 00:00:44.487 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.487 SPDK_TEST_USDT=1 00:00:44.487 SPDK_RUN_UBSAN=1 00:00:44.487 SPDK_TEST_NVMF_MDNS=1 00:00:44.487 NET_TYPE=virt 00:00:44.487 SPDK_JSONRPC_GO_CLIENT=1 00:00:44.487 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:44.487 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:44.487 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:44.494 RUN_NIGHTLY=1 00:00:44.496 [Pipeline] } 00:00:44.513 [Pipeline] // stage 00:00:44.531 [Pipeline] stage 00:00:44.533 [Pipeline] { (Run VM) 00:00:44.548 [Pipeline] sh 00:00:44.828 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:44.828 + echo 'Start stage prepare_nvme.sh' 00:00:44.828 Start stage prepare_nvme.sh 00:00:44.828 + [[ -n 4 ]] 00:00:44.828 + disk_prefix=ex4 00:00:44.828 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:44.828 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:44.828 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:44.828 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.828 ++ SPDK_TEST_NVMF=1 00:00:44.828 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.828 ++ SPDK_TEST_USDT=1 00:00:44.828 ++ SPDK_RUN_UBSAN=1 00:00:44.828 ++ SPDK_TEST_NVMF_MDNS=1 00:00:44.828 ++ NET_TYPE=virt 00:00:44.828 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:44.828 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:44.828 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:44.828 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:44.828 ++ RUN_NIGHTLY=1 00:00:44.828 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:44.828 + nvme_files=() 00:00:44.828 + declare -A nvme_files 00:00:44.828 + backend_dir=/var/lib/libvirt/images/backends 00:00:44.828 + nvme_files['nvme.img']=5G 00:00:44.828 + nvme_files['nvme-cmb.img']=5G 00:00:44.828 + nvme_files['nvme-multi0.img']=4G 00:00:44.828 + nvme_files['nvme-multi1.img']=4G 00:00:44.828 + nvme_files['nvme-multi2.img']=4G 00:00:44.828 + nvme_files['nvme-openstack.img']=8G 00:00:44.828 + nvme_files['nvme-zns.img']=5G 00:00:44.828 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:44.828 + (( SPDK_TEST_FTL == 1 )) 00:00:44.828 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:44.828 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:44.828 + for nvme in "${!nvme_files[@]}" 00:00:44.828 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:44.828 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.828 + for nvme in "${!nvme_files[@]}" 00:00:44.828 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:44.828 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.828 + for nvme in "${!nvme_files[@]}" 00:00:44.828 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:44.828 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:44.828 + for nvme in "${!nvme_files[@]}" 00:00:44.828 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:44.828 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.828 + for nvme in "${!nvme_files[@]}" 00:00:44.828 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:44.828 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.828 + for nvme in "${!nvme_files[@]}" 00:00:44.828 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:45.086 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:45.086 + for nvme in "${!nvme_files[@]}" 00:00:45.086 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:45.086 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:45.086 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:45.086 + echo 'End stage prepare_nvme.sh' 00:00:45.086 End stage prepare_nvme.sh 00:00:45.096 [Pipeline] sh 00:00:45.374 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:45.374 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:00:45.374 00:00:45.374 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:45.374 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:45.374 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:45.374 HELP=0 00:00:45.374 DRY_RUN=0 00:00:45.374 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:00:45.374 NVME_DISKS_TYPE=nvme,nvme, 00:00:45.374 NVME_AUTO_CREATE=0 00:00:45.374 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:00:45.374 NVME_CMB=,, 00:00:45.374 NVME_PMR=,, 00:00:45.374 NVME_ZNS=,, 00:00:45.374 NVME_MS=,, 00:00:45.374 NVME_FDP=,, 00:00:45.374 
SPDK_VAGRANT_DISTRO=fedora38 00:00:45.374 SPDK_VAGRANT_VMCPU=10 00:00:45.374 SPDK_VAGRANT_VMRAM=12288 00:00:45.374 SPDK_VAGRANT_PROVIDER=libvirt 00:00:45.374 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:45.374 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:45.374 SPDK_OPENSTACK_NETWORK=0 00:00:45.374 VAGRANT_PACKAGE_BOX=0 00:00:45.374 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:45.374 FORCE_DISTRO=true 00:00:45.374 VAGRANT_BOX_VERSION= 00:00:45.374 EXTRA_VAGRANTFILES= 00:00:45.374 NIC_MODEL=e1000 00:00:45.374 00:00:45.374 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:00:45.374 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:48.654 Bringing machine 'default' up with 'libvirt' provider... 00:00:49.220 ==> default: Creating image (snapshot of base box volume). 00:00:49.220 ==> default: Creating domain with the following settings... 00:00:49.220 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720853216_a77760154a932eaa954b 00:00:49.220 ==> default: -- Domain type: kvm 00:00:49.220 ==> default: -- Cpus: 10 00:00:49.220 ==> default: -- Feature: acpi 00:00:49.220 ==> default: -- Feature: apic 00:00:49.220 ==> default: -- Feature: pae 00:00:49.220 ==> default: -- Memory: 12288M 00:00:49.220 ==> default: -- Memory Backing: hugepages: 00:00:49.220 ==> default: -- Management MAC: 00:00:49.220 ==> default: -- Loader: 00:00:49.220 ==> default: -- Nvram: 00:00:49.220 ==> default: -- Base box: spdk/fedora38 00:00:49.220 ==> default: -- Storage pool: default 00:00:49.220 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720853216_a77760154a932eaa954b.img (20G) 00:00:49.220 ==> default: -- Volume Cache: default 00:00:49.220 ==> default: -- Kernel: 00:00:49.220 ==> default: -- Initrd: 00:00:49.220 ==> default: -- Graphics Type: vnc 00:00:49.220 ==> default: -- Graphics Port: -1 00:00:49.220 ==> default: -- Graphics IP: 127.0.0.1 00:00:49.220 ==> default: -- Graphics Password: Not defined 00:00:49.220 ==> default: -- Video Type: cirrus 00:00:49.220 ==> default: -- Video VRAM: 9216 00:00:49.220 ==> default: -- Sound Type: 00:00:49.220 ==> default: -- Keymap: en-us 00:00:49.220 ==> default: -- TPM Path: 00:00:49.220 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:49.220 ==> default: -- Command line args: 00:00:49.220 ==> default: -> value=-device, 00:00:49.220 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:49.220 ==> default: -> value=-drive, 00:00:49.220 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:49.220 ==> default: -> value=-device, 00:00:49.220 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:49.220 ==> default: -> value=-device, 00:00:49.220 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:49.220 ==> default: -> value=-drive, 00:00:49.220 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:49.220 ==> default: -> value=-device, 00:00:49.220 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:49.220 ==> default: -> value=-drive, 00:00:49.220 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:49.220 ==> default: -> value=-device, 00:00:49.220 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:49.220 ==> default: -> value=-drive, 00:00:49.220 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:49.220 ==> default: -> value=-device, 00:00:49.220 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:49.479 ==> default: Creating shared folders metadata... 00:00:49.479 ==> default: Starting domain. 00:00:50.854 ==> default: Waiting for domain to get an IP address... 00:01:08.940 ==> default: Waiting for SSH to become available... 00:01:08.940 ==> default: Configuring and enabling network interfaces... 00:01:12.228 default: SSH address: 192.168.121.144:22 00:01:12.228 default: SSH username: vagrant 00:01:12.228 default: SSH auth method: private key 00:01:14.128 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:22.237 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:27.506 ==> default: Mounting SSHFS shared folder... 00:01:28.879 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:28.879 ==> default: Checking Mount.. 00:01:30.252 ==> default: Folder Successfully Mounted! 00:01:30.252 ==> default: Running provisioner: file... 00:01:31.185 default: ~/.gitconfig => .gitconfig 00:01:31.469 00:01:31.469 SUCCESS! 00:01:31.469 00:01:31.469 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:31.469 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:31.469 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:31.469 00:01:31.479 [Pipeline] } 00:01:31.499 [Pipeline] // stage 00:01:31.509 [Pipeline] dir 00:01:31.509 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:31.511 [Pipeline] { 00:01:31.526 [Pipeline] catchError 00:01:31.527 [Pipeline] { 00:01:31.540 [Pipeline] sh 00:01:31.821 + vagrant ssh-config --host vagrant 00:01:31.821 + sed -ne /^Host/,$p 00:01:31.821 + tee ssh_conf 00:01:35.105 Host vagrant 00:01:35.105 HostName 192.168.121.144 00:01:35.105 User vagrant 00:01:35.105 Port 22 00:01:35.105 UserKnownHostsFile /dev/null 00:01:35.105 StrictHostKeyChecking no 00:01:35.105 PasswordAuthentication no 00:01:35.105 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:35.105 IdentitiesOnly yes 00:01:35.105 LogLevel FATAL 00:01:35.105 ForwardAgent yes 00:01:35.105 ForwardX11 yes 00:01:35.105 00:01:35.116 [Pipeline] withEnv 00:01:35.118 [Pipeline] { 00:01:35.131 [Pipeline] sh 00:01:35.405 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:35.405 source /etc/os-release 00:01:35.405 [[ -e /image.version ]] && img=$(< /image.version) 00:01:35.405 # Minimal, systemd-like check. 
00:01:35.405 if [[ -e /.dockerenv ]]; then 00:01:35.405 # Clear garbage from the node's name: 00:01:35.405 # agt-er_autotest_547-896 -> autotest_547-896 00:01:35.405 # $HOSTNAME is the actual container id 00:01:35.405 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:35.405 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:35.405 # We can assume this is a mount from a host where container is running, 00:01:35.405 # so fetch its hostname to easily identify the target swarm worker. 00:01:35.405 container="$(< /etc/hostname) ($agent)" 00:01:35.405 else 00:01:35.405 # Fallback 00:01:35.405 container=$agent 00:01:35.405 fi 00:01:35.405 fi 00:01:35.405 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:35.405 00:01:35.673 [Pipeline] } 00:01:35.692 [Pipeline] // withEnv 00:01:35.701 [Pipeline] setCustomBuildProperty 00:01:35.716 [Pipeline] stage 00:01:35.718 [Pipeline] { (Tests) 00:01:35.736 [Pipeline] sh 00:01:36.014 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:36.284 [Pipeline] sh 00:01:36.561 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:36.832 [Pipeline] timeout 00:01:36.832 Timeout set to expire in 40 min 00:01:36.833 [Pipeline] { 00:01:36.848 [Pipeline] sh 00:01:37.124 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:37.690 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:01:37.702 [Pipeline] sh 00:01:37.978 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:38.249 [Pipeline] sh 00:01:38.527 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:38.839 [Pipeline] sh 00:01:39.119 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:39.119 ++ readlink -f spdk_repo 00:01:39.119 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:39.119 + [[ -n /home/vagrant/spdk_repo ]] 00:01:39.119 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:39.119 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:39.119 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:39.119 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:39.119 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:39.119 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:39.119 + cd /home/vagrant/spdk_repo 00:01:39.119 + source /etc/os-release 00:01:39.119 ++ NAME='Fedora Linux' 00:01:39.119 ++ VERSION='38 (Cloud Edition)' 00:01:39.119 ++ ID=fedora 00:01:39.119 ++ VERSION_ID=38 00:01:39.119 ++ VERSION_CODENAME= 00:01:39.119 ++ PLATFORM_ID=platform:f38 00:01:39.119 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:39.119 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:39.119 ++ LOGO=fedora-logo-icon 00:01:39.119 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:39.119 ++ HOME_URL=https://fedoraproject.org/ 00:01:39.119 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:39.119 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:39.119 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:39.119 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:39.119 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:39.119 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:39.119 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:39.119 ++ SUPPORT_END=2024-05-14 00:01:39.119 ++ VARIANT='Cloud Edition' 00:01:39.119 ++ VARIANT_ID=cloud 00:01:39.119 + uname -a 00:01:39.377 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:39.377 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:39.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:39.635 Hugepages 00:01:39.635 node hugesize free / total 00:01:39.635 node0 1048576kB 0 / 0 00:01:39.635 node0 2048kB 0 / 0 00:01:39.635 00:01:39.635 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:39.893 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:39.893 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:39.893 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:39.893 + rm -f /tmp/spdk-ld-path 00:01:39.893 + source autorun-spdk.conf 00:01:39.893 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.893 ++ SPDK_TEST_NVMF=1 00:01:39.893 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.893 ++ SPDK_TEST_USDT=1 00:01:39.893 ++ SPDK_RUN_UBSAN=1 00:01:39.893 ++ SPDK_TEST_NVMF_MDNS=1 00:01:39.893 ++ NET_TYPE=virt 00:01:39.893 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:39.893 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:39.893 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:39.893 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.893 ++ RUN_NIGHTLY=1 00:01:39.893 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:39.893 + [[ -n '' ]] 00:01:39.893 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:39.893 + for M in /var/spdk/build-*-manifest.txt 00:01:39.893 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:39.893 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.893 + for M in /var/spdk/build-*-manifest.txt 00:01:39.893 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:39.893 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:39.893 ++ uname 00:01:39.893 + [[ Linux == \L\i\n\u\x ]] 00:01:39.893 + sudo dmesg -T 00:01:39.893 + sudo dmesg --clear 00:01:39.893 + dmesg_pid=5894 00:01:39.893 + sudo dmesg -Tw 00:01:39.893 + [[ Fedora Linux == FreeBSD ]] 00:01:39.893 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.893 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.893 + [[ 
-e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:39.893 + [[ -x /usr/src/fio-static/fio ]] 00:01:39.893 + export FIO_BIN=/usr/src/fio-static/fio 00:01:39.893 + FIO_BIN=/usr/src/fio-static/fio 00:01:39.893 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:39.893 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:39.893 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:39.893 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.893 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.893 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:39.893 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.893 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.893 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:39.893 Test configuration: 00:01:39.893 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.893 SPDK_TEST_NVMF=1 00:01:39.893 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.893 SPDK_TEST_USDT=1 00:01:39.893 SPDK_RUN_UBSAN=1 00:01:39.893 SPDK_TEST_NVMF_MDNS=1 00:01:39.893 NET_TYPE=virt 00:01:39.893 SPDK_JSONRPC_GO_CLIENT=1 00:01:39.893 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:39.893 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:39.893 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:39.893 RUN_NIGHTLY=1 06:47:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:39.893 06:47:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:39.893 06:47:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:39.893 06:47:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:39.893 06:47:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.893 06:47:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.893 06:47:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.893 06:47:47 -- paths/export.sh@5 -- $ export PATH 00:01:39.893 06:47:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.893 06:47:47 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 
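Everything autorun.sh does from this point on is parameterised by the autorun-spdk.conf dumped above: the file is sourced, and the SPDK_*/NET_TYPE/RUN_NIGHTLY variables steer which build flags and test suites run. A minimal sketch of that pattern, with the path copied from the log and the summary output being purely illustrative:

  # Source the per-job configuration the way the traced scripts do, then branch on it.
  conf=/home/vagrant/spdk_repo/autorun-spdk.conf
  source "$conf"

  # This particular job builds against an external DPDK tree rather than the bundled submodule.
  if [[ -n "${SPDK_TEST_NATIVE_DPDK:-}" ]]; then
      echo "using DPDK ${SPDK_TEST_NATIVE_DPDK} from ${SPDK_RUN_EXTERNAL_DPDK}"
  fi
  echo "functional=${SPDK_RUN_FUNCTIONAL_TEST} nvmf=${SPDK_TEST_NVMF} transport=${SPDK_TEST_NVMF_TRANSPORT} ubsan=${SPDK_RUN_UBSAN}"

The same file was already consumed once on the host by prepare_nvme.sh, which is why its variables appear twice in this trace.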
00:01:40.152 06:47:47 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:40.152 06:47:47 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720853267.XXXXXX 00:01:40.152 06:47:47 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720853267.IQliqW 00:01:40.152 06:47:47 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:40.152 06:47:47 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:01:40.152 06:47:47 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:40.152 06:47:47 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:40.152 06:47:47 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:40.152 06:47:47 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:40.152 06:47:47 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:40.152 06:47:47 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:40.152 06:47:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.152 06:47:48 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:01:40.152 06:47:48 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:40.152 06:47:48 -- pm/common@17 -- $ local monitor 00:01:40.152 06:47:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.152 06:47:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.152 06:47:48 -- pm/common@25 -- $ sleep 1 00:01:40.152 06:47:48 -- pm/common@21 -- $ date +%s 00:01:40.152 06:47:48 -- pm/common@21 -- $ date +%s 00:01:40.152 06:47:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720853268 00:01:40.152 06:47:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720853268 00:01:40.152 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720853268_collect-vmstat.pm.log 00:01:40.152 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720853268_collect-cpu-load.pm.log 00:01:41.087 06:47:49 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:41.087 06:47:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:41.087 06:47:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:41.087 06:47:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:41.087 06:47:49 -- spdk/autobuild.sh@16 -- $ date -u 00:01:41.087 Sat Jul 13 06:47:49 AM UTC 2024 00:01:41.087 06:47:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:41.087 v24.09-pre-202-g719d03c6a 00:01:41.087 06:47:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:41.087 06:47:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:41.087 06:47:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:41.087 06:47:49 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:41.087 06:47:49 -- 
common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:41.087 06:47:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.087 ************************************ 00:01:41.087 START TEST ubsan 00:01:41.087 ************************************ 00:01:41.087 using ubsan 00:01:41.087 06:47:49 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:41.087 00:01:41.087 real 0m0.000s 00:01:41.087 user 0m0.000s 00:01:41.087 sys 0m0.000s 00:01:41.087 06:47:49 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:41.087 06:47:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:41.087 ************************************ 00:01:41.087 END TEST ubsan 00:01:41.087 ************************************ 00:01:41.087 06:47:49 -- common/autotest_common.sh@1142 -- $ return 0 00:01:41.087 06:47:49 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:41.088 06:47:49 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:41.088 06:47:49 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:41.088 06:47:49 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:41.088 06:47:49 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:41.088 06:47:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.088 ************************************ 00:01:41.088 START TEST build_native_dpdk 00:01:41.088 ************************************ 00:01:41.088 06:47:49 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:41.088 eeb0605f11 version: 23.11.0 00:01:41.088 238778122a doc: update release notes for 23.11 00:01:41.088 46aa6b3cfc doc: fix description of RSS features 00:01:41.088 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:41.088 7e421ae345 devtools: support skipping forbid rule check 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:41.088 06:47:49 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:41.088 06:47:49 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:41.088 patching file config/rte_config.h 00:01:41.088 Hunk #1 succeeded at 60 (offset 1 line). 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:41.088 06:47:49 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:41.346 06:47:49 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:41.346 06:47:49 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:41.347 06:47:49 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:46.619 The Meson build system 00:01:46.619 Version: 1.3.1 00:01:46.619 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:46.619 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:46.619 Build type: native build 00:01:46.619 Program cat found: YES (/usr/bin/cat) 00:01:46.619 Project name: DPDK 00:01:46.619 Project version: 23.11.0 00:01:46.619 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:46.619 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:46.619 Host machine cpu family: x86_64 00:01:46.619 Host machine cpu: x86_64 00:01:46.619 Message: ## Building in Developer Mode ## 00:01:46.619 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:46.619 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:46.619 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:46.619 Program python3 found: YES (/usr/bin/python3) 00:01:46.619 Program cat found: YES (/usr/bin/cat) 00:01:46.619 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
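The dense scripts/common.sh trace just above is a plain field-wise version comparison: lt 23.11.0 21.11.0 splits both versions on '.', '-' and ':' and walks the fields, and since 23 > 21 it returns non-zero, i.e. the DPDK being built is not older than 21.11.0. A compact standalone sketch of the same check (the function name here is illustrative, not SPDK's):

  # Return 0 if $1 is strictly older than $2, 1 otherwise; field by field, like cmp_versions.
  version_lt() {
      local -a a b
      local i
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1    # equal versions are not "less than"
  }

  version_lt 23.11.0 21.11.0 && echo "older than 21.11" || echo "21.11 or newer"   # prints: 21.11 or newer

With that settled, the build above configures DPDK out of tree with meson into build-tmp, using an install prefix of /home/vagrant/spdk_repo/dpdk/build, the same path SPDK's configure is later pointed at via --with-dpdk in the config_params line earlier in the log.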
00:01:46.619 Compiler for C supports arguments -march=native: YES 00:01:46.619 Checking for size of "void *" : 8 00:01:46.619 Checking for size of "void *" : 8 (cached) 00:01:46.619 Library m found: YES 00:01:46.619 Library numa found: YES 00:01:46.619 Has header "numaif.h" : YES 00:01:46.619 Library fdt found: NO 00:01:46.619 Library execinfo found: NO 00:01:46.619 Has header "execinfo.h" : YES 00:01:46.619 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:46.619 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:46.619 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:46.619 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:46.619 Run-time dependency openssl found: YES 3.0.9 00:01:46.619 Run-time dependency libpcap found: YES 1.10.4 00:01:46.619 Has header "pcap.h" with dependency libpcap: YES 00:01:46.619 Compiler for C supports arguments -Wcast-qual: YES 00:01:46.619 Compiler for C supports arguments -Wdeprecated: YES 00:01:46.619 Compiler for C supports arguments -Wformat: YES 00:01:46.619 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:46.619 Compiler for C supports arguments -Wformat-security: NO 00:01:46.619 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.619 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:46.619 Compiler for C supports arguments -Wnested-externs: YES 00:01:46.619 Compiler for C supports arguments -Wold-style-definition: YES 00:01:46.619 Compiler for C supports arguments -Wpointer-arith: YES 00:01:46.619 Compiler for C supports arguments -Wsign-compare: YES 00:01:46.619 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:46.619 Compiler for C supports arguments -Wundef: YES 00:01:46.619 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.619 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:46.619 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:46.619 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.619 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:46.619 Program objdump found: YES (/usr/bin/objdump) 00:01:46.619 Compiler for C supports arguments -mavx512f: YES 00:01:46.619 Checking if "AVX512 checking" compiles: YES 00:01:46.619 Fetching value of define "__SSE4_2__" : 1 00:01:46.619 Fetching value of define "__AES__" : 1 00:01:46.619 Fetching value of define "__AVX__" : 1 00:01:46.619 Fetching value of define "__AVX2__" : 1 00:01:46.619 Fetching value of define "__AVX512BW__" : (undefined) 00:01:46.619 Fetching value of define "__AVX512CD__" : (undefined) 00:01:46.619 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:46.619 Fetching value of define "__AVX512F__" : (undefined) 00:01:46.619 Fetching value of define "__AVX512VL__" : (undefined) 00:01:46.619 Fetching value of define "__PCLMUL__" : 1 00:01:46.619 Fetching value of define "__RDRND__" : 1 00:01:46.619 Fetching value of define "__RDSEED__" : 1 00:01:46.619 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:46.619 Fetching value of define "__znver1__" : (undefined) 00:01:46.619 Fetching value of define "__znver2__" : (undefined) 00:01:46.619 Fetching value of define "__znver3__" : (undefined) 00:01:46.619 Fetching value of define "__znver4__" : (undefined) 00:01:46.619 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:46.619 Message: lib/log: Defining dependency "log" 00:01:46.619 Message: lib/kvargs: Defining dependency "kvargs" 00:01:46.619 
Message: lib/telemetry: Defining dependency "telemetry" 00:01:46.619 Checking for function "getentropy" : NO 00:01:46.619 Message: lib/eal: Defining dependency "eal" 00:01:46.619 Message: lib/ring: Defining dependency "ring" 00:01:46.619 Message: lib/rcu: Defining dependency "rcu" 00:01:46.619 Message: lib/mempool: Defining dependency "mempool" 00:01:46.619 Message: lib/mbuf: Defining dependency "mbuf" 00:01:46.619 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:46.619 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:46.619 Compiler for C supports arguments -mpclmul: YES 00:01:46.619 Compiler for C supports arguments -maes: YES 00:01:46.619 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.619 Compiler for C supports arguments -mavx512bw: YES 00:01:46.619 Compiler for C supports arguments -mavx512dq: YES 00:01:46.619 Compiler for C supports arguments -mavx512vl: YES 00:01:46.619 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:46.619 Compiler for C supports arguments -mavx2: YES 00:01:46.619 Compiler for C supports arguments -mavx: YES 00:01:46.619 Message: lib/net: Defining dependency "net" 00:01:46.619 Message: lib/meter: Defining dependency "meter" 00:01:46.619 Message: lib/ethdev: Defining dependency "ethdev" 00:01:46.619 Message: lib/pci: Defining dependency "pci" 00:01:46.619 Message: lib/cmdline: Defining dependency "cmdline" 00:01:46.619 Message: lib/metrics: Defining dependency "metrics" 00:01:46.619 Message: lib/hash: Defining dependency "hash" 00:01:46.619 Message: lib/timer: Defining dependency "timer" 00:01:46.619 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:46.619 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:46.619 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:46.619 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:46.619 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:46.619 Message: lib/acl: Defining dependency "acl" 00:01:46.619 Message: lib/bbdev: Defining dependency "bbdev" 00:01:46.619 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:46.619 Run-time dependency libelf found: YES 0.190 00:01:46.619 Message: lib/bpf: Defining dependency "bpf" 00:01:46.619 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:46.619 Message: lib/compressdev: Defining dependency "compressdev" 00:01:46.619 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:46.619 Message: lib/distributor: Defining dependency "distributor" 00:01:46.619 Message: lib/dmadev: Defining dependency "dmadev" 00:01:46.619 Message: lib/efd: Defining dependency "efd" 00:01:46.619 Message: lib/eventdev: Defining dependency "eventdev" 00:01:46.619 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:46.619 Message: lib/gpudev: Defining dependency "gpudev" 00:01:46.619 Message: lib/gro: Defining dependency "gro" 00:01:46.619 Message: lib/gso: Defining dependency "gso" 00:01:46.619 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:46.619 Message: lib/jobstats: Defining dependency "jobstats" 00:01:46.619 Message: lib/latencystats: Defining dependency "latencystats" 00:01:46.619 Message: lib/lpm: Defining dependency "lpm" 00:01:46.619 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:46.619 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:46.619 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:46.619 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:01:46.619 Message: lib/member: Defining dependency "member" 00:01:46.619 Message: lib/pcapng: Defining dependency "pcapng" 00:01:46.619 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:46.619 Message: lib/power: Defining dependency "power" 00:01:46.619 Message: lib/rawdev: Defining dependency "rawdev" 00:01:46.619 Message: lib/regexdev: Defining dependency "regexdev" 00:01:46.619 Message: lib/mldev: Defining dependency "mldev" 00:01:46.619 Message: lib/rib: Defining dependency "rib" 00:01:46.619 Message: lib/reorder: Defining dependency "reorder" 00:01:46.619 Message: lib/sched: Defining dependency "sched" 00:01:46.620 Message: lib/security: Defining dependency "security" 00:01:46.620 Message: lib/stack: Defining dependency "stack" 00:01:46.620 Has header "linux/userfaultfd.h" : YES 00:01:46.620 Has header "linux/vduse.h" : YES 00:01:46.620 Message: lib/vhost: Defining dependency "vhost" 00:01:46.620 Message: lib/ipsec: Defining dependency "ipsec" 00:01:46.620 Message: lib/pdcp: Defining dependency "pdcp" 00:01:46.620 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:46.620 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:46.620 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:46.620 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:46.620 Message: lib/fib: Defining dependency "fib" 00:01:46.620 Message: lib/port: Defining dependency "port" 00:01:46.620 Message: lib/pdump: Defining dependency "pdump" 00:01:46.620 Message: lib/table: Defining dependency "table" 00:01:46.620 Message: lib/pipeline: Defining dependency "pipeline" 00:01:46.620 Message: lib/graph: Defining dependency "graph" 00:01:46.620 Message: lib/node: Defining dependency "node" 00:01:46.620 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:47.993 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:47.993 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:47.993 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:47.993 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:47.993 Compiler for C supports arguments -Wno-unused-value: YES 00:01:47.993 Compiler for C supports arguments -Wno-format: YES 00:01:47.993 Compiler for C supports arguments -Wno-format-security: YES 00:01:47.993 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:47.993 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:47.993 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:47.993 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:47.993 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:47.993 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:47.993 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:47.993 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:47.993 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:47.993 Has header "sys/epoll.h" : YES 00:01:47.993 Program doxygen found: YES (/usr/bin/doxygen) 00:01:47.993 Configuring doxy-api-html.conf using configuration 00:01:47.993 Configuring doxy-api-man.conf using configuration 00:01:47.993 Program mandb found: YES (/usr/bin/mandb) 00:01:47.993 Program sphinx-build found: NO 00:01:47.993 Configuring rte_build_config.h using configuration 00:01:47.993 Message: 00:01:47.993 ================= 00:01:47.993 Applications Enabled 00:01:47.993 ================= 00:01:47.993 
00:01:47.993 apps: 00:01:47.993 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:47.993 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:47.993 test-pmd, test-regex, test-sad, test-security-perf, 00:01:47.993 00:01:47.993 Message: 00:01:47.993 ================= 00:01:47.993 Libraries Enabled 00:01:47.993 ================= 00:01:47.993 00:01:47.993 libs: 00:01:47.993 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:47.993 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:47.993 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:47.993 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:47.993 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:47.993 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:47.993 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:47.993 00:01:47.993 00:01:47.993 Message: 00:01:47.993 =============== 00:01:47.993 Drivers Enabled 00:01:47.993 =============== 00:01:47.993 00:01:47.993 common: 00:01:47.993 00:01:47.993 bus: 00:01:47.993 pci, vdev, 00:01:47.993 mempool: 00:01:47.993 ring, 00:01:47.993 dma: 00:01:47.993 00:01:47.993 net: 00:01:47.993 i40e, 00:01:47.993 raw: 00:01:47.993 00:01:47.993 crypto: 00:01:47.993 00:01:47.993 compress: 00:01:47.993 00:01:47.993 regex: 00:01:47.993 00:01:47.993 ml: 00:01:47.993 00:01:47.993 vdpa: 00:01:47.993 00:01:47.993 event: 00:01:47.993 00:01:47.993 baseband: 00:01:47.993 00:01:47.993 gpu: 00:01:47.993 00:01:47.993 00:01:47.993 Message: 00:01:47.993 ================= 00:01:47.993 Content Skipped 00:01:47.993 ================= 00:01:47.993 00:01:47.994 apps: 00:01:47.994 00:01:47.994 libs: 00:01:47.994 00:01:47.994 drivers: 00:01:47.994 common/cpt: not in enabled drivers build config 00:01:47.994 common/dpaax: not in enabled drivers build config 00:01:47.994 common/iavf: not in enabled drivers build config 00:01:47.994 common/idpf: not in enabled drivers build config 00:01:47.994 common/mvep: not in enabled drivers build config 00:01:47.994 common/octeontx: not in enabled drivers build config 00:01:47.994 bus/auxiliary: not in enabled drivers build config 00:01:47.994 bus/cdx: not in enabled drivers build config 00:01:47.994 bus/dpaa: not in enabled drivers build config 00:01:47.994 bus/fslmc: not in enabled drivers build config 00:01:47.994 bus/ifpga: not in enabled drivers build config 00:01:47.994 bus/platform: not in enabled drivers build config 00:01:47.994 bus/vmbus: not in enabled drivers build config 00:01:47.994 common/cnxk: not in enabled drivers build config 00:01:47.994 common/mlx5: not in enabled drivers build config 00:01:47.994 common/nfp: not in enabled drivers build config 00:01:47.994 common/qat: not in enabled drivers build config 00:01:47.994 common/sfc_efx: not in enabled drivers build config 00:01:47.994 mempool/bucket: not in enabled drivers build config 00:01:47.994 mempool/cnxk: not in enabled drivers build config 00:01:47.994 mempool/dpaa: not in enabled drivers build config 00:01:47.994 mempool/dpaa2: not in enabled drivers build config 00:01:47.994 mempool/octeontx: not in enabled drivers build config 00:01:47.994 mempool/stack: not in enabled drivers build config 00:01:47.994 dma/cnxk: not in enabled drivers build config 00:01:47.994 dma/dpaa: not in enabled drivers build config 00:01:47.994 dma/dpaa2: not in enabled drivers build config 00:01:47.994 dma/hisilicon: 
not in enabled drivers build config 00:01:47.994 dma/idxd: not in enabled drivers build config 00:01:47.994 dma/ioat: not in enabled drivers build config 00:01:47.994 dma/skeleton: not in enabled drivers build config 00:01:47.994 net/af_packet: not in enabled drivers build config 00:01:47.994 net/af_xdp: not in enabled drivers build config 00:01:47.994 net/ark: not in enabled drivers build config 00:01:47.994 net/atlantic: not in enabled drivers build config 00:01:47.994 net/avp: not in enabled drivers build config 00:01:47.994 net/axgbe: not in enabled drivers build config 00:01:47.994 net/bnx2x: not in enabled drivers build config 00:01:47.994 net/bnxt: not in enabled drivers build config 00:01:47.994 net/bonding: not in enabled drivers build config 00:01:47.994 net/cnxk: not in enabled drivers build config 00:01:47.994 net/cpfl: not in enabled drivers build config 00:01:47.994 net/cxgbe: not in enabled drivers build config 00:01:47.994 net/dpaa: not in enabled drivers build config 00:01:47.994 net/dpaa2: not in enabled drivers build config 00:01:47.994 net/e1000: not in enabled drivers build config 00:01:47.994 net/ena: not in enabled drivers build config 00:01:47.994 net/enetc: not in enabled drivers build config 00:01:47.994 net/enetfec: not in enabled drivers build config 00:01:47.994 net/enic: not in enabled drivers build config 00:01:47.994 net/failsafe: not in enabled drivers build config 00:01:47.994 net/fm10k: not in enabled drivers build config 00:01:47.994 net/gve: not in enabled drivers build config 00:01:47.994 net/hinic: not in enabled drivers build config 00:01:47.994 net/hns3: not in enabled drivers build config 00:01:47.994 net/iavf: not in enabled drivers build config 00:01:47.994 net/ice: not in enabled drivers build config 00:01:47.994 net/idpf: not in enabled drivers build config 00:01:47.994 net/igc: not in enabled drivers build config 00:01:47.994 net/ionic: not in enabled drivers build config 00:01:47.994 net/ipn3ke: not in enabled drivers build config 00:01:47.994 net/ixgbe: not in enabled drivers build config 00:01:47.994 net/mana: not in enabled drivers build config 00:01:47.994 net/memif: not in enabled drivers build config 00:01:47.994 net/mlx4: not in enabled drivers build config 00:01:47.994 net/mlx5: not in enabled drivers build config 00:01:47.994 net/mvneta: not in enabled drivers build config 00:01:47.994 net/mvpp2: not in enabled drivers build config 00:01:47.994 net/netvsc: not in enabled drivers build config 00:01:47.994 net/nfb: not in enabled drivers build config 00:01:47.994 net/nfp: not in enabled drivers build config 00:01:47.994 net/ngbe: not in enabled drivers build config 00:01:47.994 net/null: not in enabled drivers build config 00:01:47.994 net/octeontx: not in enabled drivers build config 00:01:47.994 net/octeon_ep: not in enabled drivers build config 00:01:47.994 net/pcap: not in enabled drivers build config 00:01:47.994 net/pfe: not in enabled drivers build config 00:01:47.994 net/qede: not in enabled drivers build config 00:01:47.994 net/ring: not in enabled drivers build config 00:01:47.994 net/sfc: not in enabled drivers build config 00:01:47.994 net/softnic: not in enabled drivers build config 00:01:47.994 net/tap: not in enabled drivers build config 00:01:47.994 net/thunderx: not in enabled drivers build config 00:01:47.994 net/txgbe: not in enabled drivers build config 00:01:47.994 net/vdev_netvsc: not in enabled drivers build config 00:01:47.994 net/vhost: not in enabled drivers build config 00:01:47.994 net/virtio: not in enabled 
drivers build config 00:01:47.994 net/vmxnet3: not in enabled drivers build config 00:01:47.994 raw/cnxk_bphy: not in enabled drivers build config 00:01:47.994 raw/cnxk_gpio: not in enabled drivers build config 00:01:47.994 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:47.994 raw/ifpga: not in enabled drivers build config 00:01:47.994 raw/ntb: not in enabled drivers build config 00:01:47.994 raw/skeleton: not in enabled drivers build config 00:01:47.994 crypto/armv8: not in enabled drivers build config 00:01:47.994 crypto/bcmfs: not in enabled drivers build config 00:01:47.994 crypto/caam_jr: not in enabled drivers build config 00:01:47.994 crypto/ccp: not in enabled drivers build config 00:01:47.994 crypto/cnxk: not in enabled drivers build config 00:01:47.994 crypto/dpaa_sec: not in enabled drivers build config 00:01:47.994 crypto/dpaa2_sec: not in enabled drivers build config 00:01:47.994 crypto/ipsec_mb: not in enabled drivers build config 00:01:47.994 crypto/mlx5: not in enabled drivers build config 00:01:47.994 crypto/mvsam: not in enabled drivers build config 00:01:47.994 crypto/nitrox: not in enabled drivers build config 00:01:47.994 crypto/null: not in enabled drivers build config 00:01:47.994 crypto/octeontx: not in enabled drivers build config 00:01:47.994 crypto/openssl: not in enabled drivers build config 00:01:47.994 crypto/scheduler: not in enabled drivers build config 00:01:47.994 crypto/uadk: not in enabled drivers build config 00:01:47.994 crypto/virtio: not in enabled drivers build config 00:01:47.994 compress/isal: not in enabled drivers build config 00:01:47.994 compress/mlx5: not in enabled drivers build config 00:01:47.994 compress/octeontx: not in enabled drivers build config 00:01:47.994 compress/zlib: not in enabled drivers build config 00:01:47.994 regex/mlx5: not in enabled drivers build config 00:01:47.994 regex/cn9k: not in enabled drivers build config 00:01:47.994 ml/cnxk: not in enabled drivers build config 00:01:47.994 vdpa/ifc: not in enabled drivers build config 00:01:47.994 vdpa/mlx5: not in enabled drivers build config 00:01:47.994 vdpa/nfp: not in enabled drivers build config 00:01:47.994 vdpa/sfc: not in enabled drivers build config 00:01:47.994 event/cnxk: not in enabled drivers build config 00:01:47.994 event/dlb2: not in enabled drivers build config 00:01:47.994 event/dpaa: not in enabled drivers build config 00:01:47.994 event/dpaa2: not in enabled drivers build config 00:01:47.994 event/dsw: not in enabled drivers build config 00:01:47.994 event/opdl: not in enabled drivers build config 00:01:47.994 event/skeleton: not in enabled drivers build config 00:01:47.994 event/sw: not in enabled drivers build config 00:01:47.994 event/octeontx: not in enabled drivers build config 00:01:47.994 baseband/acc: not in enabled drivers build config 00:01:47.994 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:47.994 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:47.994 baseband/la12xx: not in enabled drivers build config 00:01:47.994 baseband/null: not in enabled drivers build config 00:01:47.994 baseband/turbo_sw: not in enabled drivers build config 00:01:47.994 gpu/cuda: not in enabled drivers build config 00:01:47.994 00:01:47.994 00:01:47.994 Build targets in project: 220 00:01:47.994 00:01:47.994 DPDK 23.11.0 00:01:47.994 00:01:47.994 User defined options 00:01:47.994 libdir : lib 00:01:47.994 prefix : /home/vagrant/spdk_repo/dpdk/build 00:01:47.994 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:01:47.994 c_link_args : 00:01:47.994 enable_docs : false 00:01:47.994 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:47.994 enable_kmods : false 00:01:47.994 machine : native 00:01:47.994 tests : false 00:01:47.994 00:01:47.994 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:47.994 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:47.994 06:47:55 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:01:47.994 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:01:47.994 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:47.994 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.994 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.994 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:47.994 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:47.994 [6/710] Linking static target lib/librte_kvargs.a 00:01:47.994 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:47.994 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:47.994 [9/710] Linking static target lib/librte_log.a 00:01:47.994 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:48.253 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.511 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:48.511 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.511 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:48.511 [15/710] Linking target lib/librte_log.so.24.0 00:01:48.511 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:48.769 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:48.769 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:48.769 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:48.769 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:49.027 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:49.027 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:49.027 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:49.027 [24/710] Linking target lib/librte_kvargs.so.24.0 00:01:49.027 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:49.285 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:49.285 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:49.285 [28/710] Linking static target lib/librte_telemetry.a 00:01:49.285 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:49.286 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:49.286 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:49.545 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:49.545 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:49.545 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.545 [35/710] Linking target lib/librte_telemetry.so.24.0 00:01:49.545 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:49.803 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:49.803 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:49.803 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:49.803 [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:49.803 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:49.803 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:49.803 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:49.803 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:50.061 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:50.061 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:50.061 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:50.061 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:50.319 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:50.319 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:50.319 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:50.578 [52/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.578 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:50.578 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:50.578 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:50.578 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:50.578 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:50.837 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:50.837 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:50.837 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:50.837 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:50.837 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:50.837 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.096 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.096 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.096 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.096 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.096 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.355 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.355 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.355 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.355 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
00:01:51.614 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.614 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.614 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.614 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.614 [77/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.614 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.873 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.873 [80/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.131 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.131 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.131 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.132 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.132 [85/710] Linking static target lib/librte_ring.a 00:01:52.391 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.391 [87/710] Linking static target lib/librte_eal.a 00:01:52.391 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.391 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.650 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.650 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.650 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.650 [93/710] Linking static target lib/librte_mempool.a 00:01:52.650 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.650 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.909 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.909 [97/710] Linking static target lib/librte_rcu.a 00:01:53.167 [98/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.167 [99/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:53.167 [100/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:53.167 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.167 [102/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.167 [103/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.167 [104/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.424 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.424 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.681 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.681 [108/710] Linking static target lib/librte_mbuf.a 00:01:53.682 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.682 [110/710] Linking static target lib/librte_net.a 00:01:53.682 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.682 [112/710] Linking static target lib/librte_meter.a 00:01:53.939 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.939 [114/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.939 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.939 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.939 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.939 [118/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.204 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.785 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:54.785 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.043 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.043 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.043 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.300 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.300 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.300 [127/710] Linking static target lib/librte_pci.a 00:01:55.300 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.300 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:55.300 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.558 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.558 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.558 [133/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:55.558 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.558 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:55.558 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.558 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.558 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.558 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:55.816 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.816 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.816 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.074 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.074 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.074 [145/710] Linking static target lib/librte_cmdline.a 00:01:56.333 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.333 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:56.333 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:56.333 [149/710] Linking static target lib/librte_metrics.a 00:01:56.333 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.899 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.899 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.899 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:56.899 [154/710] Linking static target 
lib/librte_timer.a 00:01:56.899 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.158 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.724 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:57.724 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:57.724 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:57.724 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:58.289 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:58.547 [162/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:58.547 [163/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:58.547 [164/710] Linking static target lib/librte_bitratestats.a 00:01:58.547 [165/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:58.547 [166/710] Linking static target lib/librte_ethdev.a 00:01:58.547 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.547 [168/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.547 [169/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:58.547 [170/710] Linking static target lib/librte_bbdev.a 00:01:58.547 [171/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.805 [172/710] Linking static target lib/librte_hash.a 00:01:58.805 [173/710] Linking target lib/librte_eal.so.24.0 00:01:58.805 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:58.805 [175/710] Linking target lib/librte_ring.so.24.0 00:01:58.805 [176/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:59.064 [177/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:59.064 [178/710] Linking target lib/librte_meter.so.24.0 00:01:59.064 [179/710] Linking target lib/librte_pci.so.24.0 00:01:59.064 [180/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:59.064 [181/710] Linking target lib/librte_rcu.so.24.0 00:01:59.064 [182/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:59.064 [183/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:59.064 [184/710] Linking target lib/librte_mempool.so.24.0 00:01:59.322 [185/710] Linking target lib/librte_timer.so.24.0 00:01:59.322 [186/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:59.322 [187/710] Linking static target lib/acl/libavx2_tmp.a 00:01:59.322 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:59.322 [189/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:59.322 [190/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:59.322 [191/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.322 [192/710] Linking static target lib/acl/libavx512_tmp.a 00:01:59.322 [193/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:59.322 [194/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:59.322 [195/710] Linking target lib/librte_mbuf.so.24.0 00:01:59.322 [196/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.581 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:59.581 [198/710] Linking target lib/librte_net.so.24.0 00:01:59.581 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:59.581 [200/710] Linking static target lib/librte_acl.a 00:01:59.581 [201/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:59.581 [202/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:59.581 [203/710] Linking target lib/librte_bbdev.so.24.0 00:01:59.839 [204/710] Linking target lib/librte_cmdline.so.24.0 00:01:59.839 [205/710] Linking static target lib/librte_cfgfile.a 00:01:59.839 [206/710] Linking target lib/librte_hash.so.24.0 00:01:59.839 [207/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:59.839 [208/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:00.097 [209/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:00.097 [210/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.097 [211/710] Linking target lib/librte_acl.so.24.0 00:02:00.097 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.097 [213/710] Linking target lib/librte_cfgfile.so.24.0 00:02:00.098 [214/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:00.098 [215/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:00.358 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:00.616 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:00.616 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:00.616 [219/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:00.616 [220/710] Linking static target lib/librte_bpf.a 00:02:00.616 [221/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:00.875 [222/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:00.875 [223/710] Linking static target lib/librte_compressdev.a 00:02:00.875 [224/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:00.875 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.875 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.134 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:01.134 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:01.134 [229/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.392 [230/710] Linking target lib/librte_compressdev.so.24.0 00:02:01.392 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:01.392 [232/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:01.392 [233/710] Linking static target lib/librte_distributor.a 00:02:01.651 [234/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:01.651 [235/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.651 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:01.651 [237/710] Linking target lib/librte_distributor.so.24.0 00:02:01.651 [238/710] Linking static 
target lib/librte_dmadev.a 00:02:02.219 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:02.219 [240/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.219 [241/710] Linking target lib/librte_dmadev.so.24.0 00:02:02.219 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:02.477 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:02.477 [244/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:02.477 [245/710] Linking static target lib/librte_efd.a 00:02:02.477 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:02.736 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:02.736 [248/710] Linking static target lib/librte_cryptodev.a 00:02:02.736 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.736 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:02.736 [251/710] Linking target lib/librte_efd.so.24.0 00:02:03.303 [252/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:03.303 [253/710] Linking static target lib/librte_dispatcher.a 00:02:03.303 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:03.303 [255/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:03.303 [256/710] Linking static target lib/librte_gpudev.a 00:02:03.561 [257/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.561 [258/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:03.561 [259/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:03.561 [260/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.561 [261/710] Linking target lib/librte_ethdev.so.24.0 00:02:03.561 [262/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:03.819 [263/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:03.819 [264/710] Linking target lib/librte_metrics.so.24.0 00:02:03.819 [265/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:04.076 [266/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.076 [267/710] Linking target lib/librte_bitratestats.so.24.0 00:02:04.076 [268/710] Linking target lib/librte_bpf.so.24.0 00:02:04.076 [269/710] Linking target lib/librte_cryptodev.so.24.0 00:02:04.076 [270/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:04.076 [271/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:04.076 [272/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:04.077 [273/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:04.334 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.334 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:04.334 [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:04.334 [277/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:04.334 [278/710] Linking static target lib/librte_eventdev.a 00:02:04.334 
[279/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:04.334 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:04.334 [281/710] Linking static target lib/librte_gro.a 00:02:04.591 [282/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:04.591 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:04.591 [284/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.848 [285/710] Linking target lib/librte_gro.so.24.0 00:02:04.848 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:04.848 [287/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:04.848 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:04.848 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:04.848 [290/710] Linking static target lib/librte_gso.a 00:02:05.106 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.106 [292/710] Linking target lib/librte_gso.so.24.0 00:02:05.378 [293/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:05.378 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:05.378 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:05.378 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:05.378 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:05.378 [298/710] Linking static target lib/librte_jobstats.a 00:02:05.650 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:05.650 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:05.650 [301/710] Linking static target lib/librte_ip_frag.a 00:02:05.650 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:05.650 [303/710] Linking static target lib/librte_latencystats.a 00:02:05.909 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.909 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:05.909 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.909 [307/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.909 [308/710] Linking target lib/librte_latencystats.so.24.0 00:02:05.909 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:02:05.909 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:05.909 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:05.909 [312/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:06.166 [313/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:06.166 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:06.166 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:06.166 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:06.166 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:06.424 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.682 [319/710] Linking target lib/librte_eventdev.so.24.0 00:02:06.682 [320/710] 
Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:06.682 [321/710] Linking static target lib/librte_lpm.a 00:02:06.682 [322/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:06.682 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:06.682 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:02:06.682 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:06.940 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:06.940 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:06.940 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:06.940 [329/710] Linking static target lib/librte_pcapng.a 00:02:06.940 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.940 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:06.940 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:07.198 [333/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:07.198 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:07.198 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.198 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:07.456 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.456 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.456 [339/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:07.714 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:07.714 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:07.714 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:07.714 [343/710] Linking static target lib/librte_power.a 00:02:07.714 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:07.714 [345/710] Linking static target lib/librte_regexdev.a 00:02:07.714 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:07.714 [347/710] Linking static target lib/librte_rawdev.a 00:02:07.972 [348/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:07.972 [349/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:07.972 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:08.230 [351/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:08.230 [352/710] Linking static target lib/librte_member.a 00:02:08.230 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:08.230 [354/710] Linking static target lib/librte_mldev.a 00:02:08.230 [355/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.230 [356/710] Linking target lib/librte_rawdev.so.24.0 00:02:08.488 [357/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:08.488 [358/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.488 [359/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.488 [360/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:08.488 [361/710] Linking target 
lib/librte_power.so.24.0 00:02:08.488 [362/710] Linking target lib/librte_member.so.24.0 00:02:08.488 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.746 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:08.746 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:08.746 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:08.746 [367/710] Linking static target lib/librte_reorder.a 00:02:08.746 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:09.004 [369/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:09.004 [370/710] Linking static target lib/librte_rib.a 00:02:09.004 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:09.004 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:09.004 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:09.262 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.262 [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:09.262 [376/710] Linking static target lib/librte_stack.a 00:02:09.262 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:09.262 [378/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:09.262 [379/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.520 [380/710] Linking static target lib/librte_security.a 00:02:09.520 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.520 [382/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.520 [383/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.520 [384/710] Linking target lib/librte_rib.so.24.0 00:02:09.520 [385/710] Linking target lib/librte_stack.so.24.0 00:02:09.520 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:09.520 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:09.778 [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:09.778 [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.778 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:09.778 [391/710] Linking target lib/librte_security.so.24.0 00:02:10.036 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:10.036 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:10.036 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:10.036 [395/710] Linking static target lib/librte_sched.a 00:02:10.602 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.602 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.602 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:10.602 [399/710] Linking target lib/librte_sched.so.24.0 00:02:10.602 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:10.602 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:10.861 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:11.119 [403/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:11.119 
[404/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:11.377 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:11.377 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:11.377 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:11.636 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:11.636 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:11.636 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:11.895 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:11.895 [412/710] Linking static target lib/librte_ipsec.a 00:02:11.895 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:12.153 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.153 [415/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:12.153 [416/710] Linking target lib/librte_ipsec.so.24.0 00:02:12.153 [417/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:12.153 [418/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:12.410 [419/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:12.410 [420/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:12.410 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:12.410 [422/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:12.410 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:13.346 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:13.346 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:13.346 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:13.346 [427/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:13.346 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:13.346 [429/710] Linking static target lib/librte_fib.a 00:02:13.346 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:13.604 [431/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:13.604 [432/710] Linking static target lib/librte_pdcp.a 00:02:13.604 [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.604 [434/710] Linking target lib/librte_fib.so.24.0 00:02:13.863 [435/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.863 [436/710] Linking target lib/librte_pdcp.so.24.0 00:02:13.863 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:14.431 [438/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:14.431 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:14.431 [440/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:14.431 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:14.431 [442/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:14.431 [443/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:14.690 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:14.948 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:14.948 [446/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:14.948 [447/710] Linking static target lib/librte_port.a 00:02:15.206 [448/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:15.206 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:15.206 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:15.206 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:15.206 [452/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:15.463 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:15.463 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.463 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:15.463 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:15.463 [457/710] Linking static target lib/librte_pdump.a 00:02:15.721 [458/710] Linking target lib/librte_port.so.24.0 00:02:15.721 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:15.721 [460/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:15.721 [461/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.980 [462/710] Linking target lib/librte_pdump.so.24.0 00:02:16.239 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:16.239 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:16.239 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:16.497 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:16.497 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:16.497 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:16.756 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:16.756 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:16.756 [471/710] Linking static target lib/librte_table.a 00:02:17.014 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:17.014 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:17.274 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.274 [475/710] Linking target lib/librte_table.so.24.0 00:02:17.274 [476/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:17.533 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:17.533 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:17.791 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:17.791 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:18.050 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:18.309 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:18.309 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:18.309 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:18.309 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:18.567 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
00:02:18.826 [487/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:18.826 [488/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:19.085 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:19.085 [490/710] Linking static target lib/librte_graph.a 00:02:19.085 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:19.086 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:19.343 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:19.601 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:19.601 [495/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.601 [496/710] Linking target lib/librte_graph.so.24.0 00:02:19.601 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:19.601 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:19.859 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:20.117 [500/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:20.117 [501/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:20.117 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:20.117 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:20.414 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:20.414 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.414 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:20.672 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:20.672 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:20.930 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.930 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.930 [511/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:20.930 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.930 [513/710] Linking static target lib/librte_node.a 00:02:20.930 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:21.190 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:21.190 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.449 [517/710] Linking target lib/librte_node.so.24.0 00:02:21.449 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.449 [519/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:21.449 [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:21.449 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:21.449 [522/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:21.708 [523/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.708 [524/710] Linking static target drivers/librte_bus_pci.a 00:02:21.708 [525/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:21.708 [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.708 [527/710] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.708 [528/710] Linking static target drivers/librte_bus_vdev.a 00:02:21.967 [529/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.967 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.967 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:21.967 [532/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:21.967 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:21.967 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:21.967 [535/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.967 [536/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:22.225 [537/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:22.225 [538/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.225 [539/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.225 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:22.484 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.484 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.484 [543/710] Linking static target drivers/librte_mempool_ring.a 00:02:22.484 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.484 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:22.742 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:23.001 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:23.259 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:23.259 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:23.259 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:23.518 [551/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:24.085 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:24.085 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:24.085 [554/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:24.085 [555/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:24.085 [556/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:24.343 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:24.601 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:24.859 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:24.859 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:25.116 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:25.116 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:25.679 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:25.679 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:25.679 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:25.937 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:26.194 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:26.452 [568/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:26.452 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:26.452 [570/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:26.452 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:26.452 [572/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:26.452 [573/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:26.710 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.710 [575/710] Linking static target lib/librte_vhost.a 00:02:26.968 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:26.968 [577/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:26.968 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:26.968 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:27.226 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:27.226 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:27.483 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:27.740 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:27.740 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:27.740 [585/710] Linking static target drivers/librte_net_i40e.a 00:02:27.740 [586/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:27.740 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:27.740 [588/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:27.740 [589/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:27.740 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:27.740 [591/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:27.997 [592/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.997 [593/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:27.997 [594/710] Linking target lib/librte_vhost.so.24.0 00:02:28.255 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:28.255 [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.558 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:28.558 [598/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:28.558 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:28.816 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:29.073 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:29.073 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:29.073 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:29.073 
[604/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:29.331 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:29.331 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:29.589 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:29.847 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:29.847 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:29.847 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:30.104 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:30.104 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:30.104 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:30.362 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:30.362 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:30.362 [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:30.362 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:30.620 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:30.620 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:30.878 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:31.136 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:31.136 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:31.136 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:32.070 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:32.070 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:32.070 [626/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:32.070 [627/710] Linking static target lib/librte_pipeline.a 00:02:32.070 [628/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:32.070 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:32.328 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:32.328 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:32.328 [632/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:32.587 [633/710] Linking target app/dpdk-dumpcap 00:02:32.587 [634/710] Linking target app/dpdk-graph 00:02:32.587 [635/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:32.845 [636/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:32.845 [637/710] Linking target app/dpdk-pdump 00:02:32.845 [638/710] Linking target app/dpdk-proc-info 00:02:32.845 [639/710] Linking target app/dpdk-test-acl 00:02:33.103 [640/710] Linking target app/dpdk-test-cmdline 00:02:33.103 [641/710] Linking target app/dpdk-test-compress-perf 00:02:33.103 [642/710] Linking target 
app/dpdk-test-crypto-perf 00:02:33.361 [643/710] Linking target app/dpdk-test-dma-perf 00:02:33.361 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:33.361 [645/710] Linking target app/dpdk-test-fib 00:02:33.361 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:33.619 [647/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:33.619 [648/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:33.878 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:33.878 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:33.878 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:33.878 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:34.136 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:34.395 [654/710] Linking target app/dpdk-test-gpudev 00:02:34.395 [655/710] Linking target app/dpdk-test-eventdev 00:02:34.395 [656/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:34.395 [657/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:34.395 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:34.653 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:34.653 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:34.653 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:34.910 [662/710] Linking target app/dpdk-test-flow-perf 00:02:34.910 [663/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.910 [664/710] Linking target app/dpdk-test-bbdev 00:02:34.910 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:34.910 [666/710] Linking target lib/librte_pipeline.so.24.0 00:02:35.167 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:35.167 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:35.167 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:35.424 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:35.424 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:35.682 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:35.682 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:35.682 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:35.682 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:35.938 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:36.195 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:36.195 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:36.195 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:36.453 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:36.453 [681/710] Linking target app/dpdk-test-pipeline 00:02:36.453 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:36.710 [683/710] Linking target app/dpdk-test-mldev 
00:02:37.275 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:37.275 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:37.275 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:37.275 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:37.275 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:37.531 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:37.531 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:37.789 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:37.789 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:38.046 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:38.304 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:38.562 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:38.562 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:38.819 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:38.819 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:39.077 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:39.077 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:39.077 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:39.077 [702/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:39.335 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:39.335 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:39.335 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:39.593 [706/710] Linking target app/dpdk-test-regex 00:02:39.593 [707/710] Linking target app/dpdk-test-sad 00:02:39.851 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:40.109 [709/710] Linking target app/dpdk-testpmd 00:02:40.675 [710/710] Linking target app/dpdk-test-security-perf 00:02:40.675 06:48:48 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:40.675 06:48:48 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:40.675 06:48:48 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:40.675 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:40.675 [0/1] Installing files. 
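(Editor's note: for readers who want to rerun just this step outside of Jenkins, the autobuild commands echoed immediately above reduce to the sketch below. This is only an illustrative reconstruction based on the `uname -s` guard and the `ninja ... install` call shown in the log; it assumes the build-tmp directory was already configured by meson earlier in this job, and the FreeBSD branch of autobuild_common.sh is not reproduced here.)

    #!/usr/bin/env bash
    # Sketch of the install step echoed above; the path and -j10 value are taken
    # verbatim from the log lines, everything else is an assumption.
    set -euo pipefail

    # The script first checks the platform; on this Linux worker the FreeBSD
    # comparison is false, so the meson-built DPDK tree is installed with ninja.
    if [[ "$(uname -s)" != FreeBSD ]]; then
        ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
    fi

(The install output that follows lists every example source file being copied into build/share/dpdk/examples.)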
00:02:40.943 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:40.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:40.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:40.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:40.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:40.946 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:40.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:40.949 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:40.949 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:40.949 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.215 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:41.216 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:41.216 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.216 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.478 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.478 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.478 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.478 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.478 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.478 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.478 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.478 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.478 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.478 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:41.478 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.478 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.478 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.478 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.478 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.478 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.479 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.480 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.481 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:41.482 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:41.482 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:41.482 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:41.482 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:41.482 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:41.482 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:41.482 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:41.482 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:41.482 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:41.482 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:41.482 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:41.482 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:41.482 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:41.482 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:41.482 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:41.482 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:41.482 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:41.482 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:41.482 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:41.482 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:41.482 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:41.482 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:41.482 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:41.482 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:41.482 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:41.482 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:41.482 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:41.482 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:41.483 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:41.483 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:41.483 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:41.483 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:41.483 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:41.483 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:41.483 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:41.483 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:41.483 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:41.483 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:41.483 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:41.483 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:41.483 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:41.483 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:41.483 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:41.483 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:41.483 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:41.483 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:41.483 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:41.483 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:41.483 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:41.483 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:41.483 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:41.483 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:41.483 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:41.483 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:41.483 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:41.483 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:41.483 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:41.483 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:41.483 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:41.483 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:41.483 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:41.483 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:41.483 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:41.483 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:41.483 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:41.483 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:41.483 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:41.483 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:41.483 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:41.483 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:41.483 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:41.483 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:41.483 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:41.483 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:41.483 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:41.483 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:41.483 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:41.483 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:41.483 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:41.483 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:41.483 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:41.483 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:41.483 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:41.483 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:41.483 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:41.483 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:41.483 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:41.483 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:41.483 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:41.483 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:41.483 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:41.483 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:41.483 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:41.483 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:41.483 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:41.483 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:41.483 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:41.483 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:41.483 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:41.483 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:41.483 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:41.483 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:41.483 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:41.483 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:41.483 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:41.483 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:41.483 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:41.483 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:41.483 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:41.483 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:41.483 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:41.483 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:41.483 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:41.484 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:41.484 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:41.484 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:41.484 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:41.484 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:41.484 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:41.484 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:41.484 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:41.484 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:41.484 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:41.484 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:41.484 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:41.484 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:41.484 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:41.484 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:41.484 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:41.484 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:41.484 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:41.484 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
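[editor's note] The pkg-config files and PMD plugin symlinks installed above (libdpdk.pc under build/lib/pkgconfig, drivers under dpdk/pmds-24.0) are what a downstream build consumes. A minimal sketch of inspecting this local DPDK install by hand; the paths come from the log itself, but running these queries is an illustration, not something this job actually does:

    # point pkg-config at the .pc files the install step just wrote
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk      # should report the version of the DPDK tree just installed
    pkg-config --cflags --libs libdpdk   # compile/link flags an application would build against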
00:02:41.484 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:41.484 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:41.484 06:48:49 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:41.484 06:48:49 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:41.484 00:02:41.484 real 1m0.381s 00:02:41.484 user 7m18.355s 00:02:41.484 sys 1m11.767s 00:02:41.484 06:48:49 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:41.484 06:48:49 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:41.484 ************************************ 00:02:41.484 END TEST build_native_dpdk 00:02:41.484 ************************************ 00:02:41.742 06:48:49 -- common/autotest_common.sh@1142 -- $ return 0 00:02:41.742 06:48:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:41.742 06:48:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:41.742 06:48:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:41.742 06:48:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:41.742 06:48:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:41.742 06:48:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:41.742 06:48:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:41.742 06:48:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:02:41.742 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:41.742 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:41.742 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:42.000 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:42.257 Using 'verbs' RDMA provider 00:02:55.823 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:10.693 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:10.693 go version go1.21.1 linux/amd64 00:03:10.693 Creating mk/config.mk...done. 00:03:10.693 Creating mk/cc.flags.mk...done. 00:03:10.693 Type 'make' to build. 00:03:10.693 06:49:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:10.693 06:49:17 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:10.693 06:49:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:10.693 06:49:17 -- common/autotest_common.sh@10 -- $ set +x 00:03:10.693 ************************************ 00:03:10.693 START TEST make 00:03:10.693 ************************************ 00:03:10.693 06:49:17 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:10.693 make[1]: Nothing to be done for 'all'. 
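[editor's note] The configure invocation above is what wires SPDK to the DPDK tree installed in the previous step. A reduced sketch of the same step for a local reproduction, keeping only a few of the flags visible in the log (the full CI invocation also enables werror, coverage, ublk, golang, avahi, etc., and -j10 is simply this job's parallelism, not a requirement):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
                --with-shared --enable-debug       # subset of the flags shown above
    make -j10                                      # same parallelism used by 'run_test make' below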
00:03:37.222 CC lib/ut/ut.o 00:03:37.222 CC lib/ut_mock/mock.o 00:03:37.222 CC lib/log/log.o 00:03:37.222 CC lib/log/log_flags.o 00:03:37.222 CC lib/log/log_deprecated.o 00:03:37.222 LIB libspdk_ut.a 00:03:37.222 LIB libspdk_log.a 00:03:37.222 LIB libspdk_ut_mock.a 00:03:37.222 SO libspdk_ut.so.2.0 00:03:37.222 SO libspdk_log.so.7.0 00:03:37.222 SO libspdk_ut_mock.so.6.0 00:03:37.222 SYMLINK libspdk_ut.so 00:03:37.222 SYMLINK libspdk_log.so 00:03:37.222 SYMLINK libspdk_ut_mock.so 00:03:37.222 CXX lib/trace_parser/trace.o 00:03:37.222 CC lib/util/base64.o 00:03:37.222 CC lib/util/bit_array.o 00:03:37.222 CC lib/util/cpuset.o 00:03:37.222 CC lib/util/crc16.o 00:03:37.222 CC lib/util/crc32.o 00:03:37.222 CC lib/ioat/ioat.o 00:03:37.222 CC lib/util/crc32c.o 00:03:37.222 CC lib/dma/dma.o 00:03:37.222 CC lib/vfio_user/host/vfio_user_pci.o 00:03:37.222 CC lib/util/crc32_ieee.o 00:03:37.222 CC lib/vfio_user/host/vfio_user.o 00:03:37.222 CC lib/util/crc64.o 00:03:37.222 CC lib/util/dif.o 00:03:37.222 CC lib/util/fd.o 00:03:37.222 LIB libspdk_dma.a 00:03:37.222 SO libspdk_dma.so.4.0 00:03:37.222 CC lib/util/file.o 00:03:37.222 LIB libspdk_ioat.a 00:03:37.222 CC lib/util/hexlify.o 00:03:37.222 CC lib/util/iov.o 00:03:37.222 SYMLINK libspdk_dma.so 00:03:37.222 CC lib/util/math.o 00:03:37.222 SO libspdk_ioat.so.7.0 00:03:37.222 CC lib/util/pipe.o 00:03:37.222 LIB libspdk_vfio_user.a 00:03:37.222 CC lib/util/strerror_tls.o 00:03:37.222 SYMLINK libspdk_ioat.so 00:03:37.222 SO libspdk_vfio_user.so.5.0 00:03:37.222 CC lib/util/string.o 00:03:37.222 CC lib/util/uuid.o 00:03:37.222 SYMLINK libspdk_vfio_user.so 00:03:37.222 CC lib/util/fd_group.o 00:03:37.222 CC lib/util/xor.o 00:03:37.222 CC lib/util/zipf.o 00:03:37.222 LIB libspdk_util.a 00:03:37.222 SO libspdk_util.so.9.1 00:03:37.222 LIB libspdk_trace_parser.a 00:03:37.222 SYMLINK libspdk_util.so 00:03:37.222 SO libspdk_trace_parser.so.5.0 00:03:37.222 SYMLINK libspdk_trace_parser.so 00:03:37.222 CC lib/vmd/vmd.o 00:03:37.222 CC lib/conf/conf.o 00:03:37.222 CC lib/idxd/idxd.o 00:03:37.222 CC lib/rdma_provider/common.o 00:03:37.222 CC lib/vmd/led.o 00:03:37.222 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:37.222 CC lib/idxd/idxd_user.o 00:03:37.222 CC lib/json/json_parse.o 00:03:37.222 CC lib/rdma_utils/rdma_utils.o 00:03:37.222 CC lib/env_dpdk/env.o 00:03:37.222 CC lib/env_dpdk/memory.o 00:03:37.222 CC lib/idxd/idxd_kernel.o 00:03:37.222 LIB libspdk_rdma_provider.a 00:03:37.222 LIB libspdk_conf.a 00:03:37.222 SO libspdk_rdma_provider.so.6.0 00:03:37.222 CC lib/json/json_util.o 00:03:37.222 CC lib/json/json_write.o 00:03:37.222 SO libspdk_conf.so.6.0 00:03:37.222 LIB libspdk_rdma_utils.a 00:03:37.222 SYMLINK libspdk_rdma_provider.so 00:03:37.222 SO libspdk_rdma_utils.so.1.0 00:03:37.222 SYMLINK libspdk_conf.so 00:03:37.222 CC lib/env_dpdk/pci.o 00:03:37.222 CC lib/env_dpdk/init.o 00:03:37.222 CC lib/env_dpdk/threads.o 00:03:37.222 SYMLINK libspdk_rdma_utils.so 00:03:37.222 CC lib/env_dpdk/pci_ioat.o 00:03:37.222 CC lib/env_dpdk/pci_virtio.o 00:03:37.222 CC lib/env_dpdk/pci_vmd.o 00:03:37.222 CC lib/env_dpdk/pci_idxd.o 00:03:37.222 LIB libspdk_json.a 00:03:37.222 LIB libspdk_idxd.a 00:03:37.222 SO libspdk_json.so.6.0 00:03:37.222 SO libspdk_idxd.so.12.0 00:03:37.223 LIB libspdk_vmd.a 00:03:37.223 SYMLINK libspdk_json.so 00:03:37.223 CC lib/env_dpdk/pci_event.o 00:03:37.223 CC lib/env_dpdk/sigbus_handler.o 00:03:37.223 CC lib/env_dpdk/pci_dpdk.o 00:03:37.223 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:37.223 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:37.223 SO 
libspdk_vmd.so.6.0 00:03:37.223 SYMLINK libspdk_idxd.so 00:03:37.223 SYMLINK libspdk_vmd.so 00:03:37.223 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:37.223 CC lib/jsonrpc/jsonrpc_server.o 00:03:37.223 CC lib/jsonrpc/jsonrpc_client.o 00:03:37.223 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:37.223 LIB libspdk_jsonrpc.a 00:03:37.223 SO libspdk_jsonrpc.so.6.0 00:03:37.223 SYMLINK libspdk_jsonrpc.so 00:03:37.223 LIB libspdk_env_dpdk.a 00:03:37.223 SO libspdk_env_dpdk.so.14.1 00:03:37.223 CC lib/rpc/rpc.o 00:03:37.223 SYMLINK libspdk_env_dpdk.so 00:03:37.223 LIB libspdk_rpc.a 00:03:37.223 SO libspdk_rpc.so.6.0 00:03:37.223 SYMLINK libspdk_rpc.so 00:03:37.223 CC lib/notify/notify.o 00:03:37.223 CC lib/notify/notify_rpc.o 00:03:37.223 CC lib/trace/trace.o 00:03:37.223 CC lib/trace/trace_flags.o 00:03:37.223 CC lib/trace/trace_rpc.o 00:03:37.223 CC lib/keyring/keyring.o 00:03:37.223 CC lib/keyring/keyring_rpc.o 00:03:37.223 LIB libspdk_notify.a 00:03:37.223 SO libspdk_notify.so.6.0 00:03:37.480 SYMLINK libspdk_notify.so 00:03:37.480 LIB libspdk_trace.a 00:03:37.480 LIB libspdk_keyring.a 00:03:37.480 SO libspdk_trace.so.10.0 00:03:37.480 SO libspdk_keyring.so.1.0 00:03:37.480 SYMLINK libspdk_trace.so 00:03:37.480 SYMLINK libspdk_keyring.so 00:03:37.737 CC lib/thread/iobuf.o 00:03:37.737 CC lib/thread/thread.o 00:03:37.737 CC lib/sock/sock.o 00:03:37.737 CC lib/sock/sock_rpc.o 00:03:38.303 LIB libspdk_sock.a 00:03:38.303 SO libspdk_sock.so.10.0 00:03:38.303 SYMLINK libspdk_sock.so 00:03:38.870 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:38.870 CC lib/nvme/nvme_ctrlr.o 00:03:38.870 CC lib/nvme/nvme_fabric.o 00:03:38.870 CC lib/nvme/nvme_ns_cmd.o 00:03:38.870 CC lib/nvme/nvme_ns.o 00:03:38.870 CC lib/nvme/nvme_pcie_common.o 00:03:38.870 CC lib/nvme/nvme_pcie.o 00:03:38.870 CC lib/nvme/nvme.o 00:03:38.870 CC lib/nvme/nvme_qpair.o 00:03:39.435 LIB libspdk_thread.a 00:03:39.435 SO libspdk_thread.so.10.1 00:03:39.435 SYMLINK libspdk_thread.so 00:03:39.435 CC lib/nvme/nvme_quirks.o 00:03:39.435 CC lib/nvme/nvme_transport.o 00:03:39.694 CC lib/accel/accel.o 00:03:39.694 CC lib/blob/blobstore.o 00:03:39.694 CC lib/nvme/nvme_discovery.o 00:03:39.694 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:39.694 CC lib/blob/request.o 00:03:39.694 CC lib/init/json_config.o 00:03:39.952 CC lib/init/subsystem.o 00:03:39.952 CC lib/init/subsystem_rpc.o 00:03:39.952 CC lib/blob/zeroes.o 00:03:39.952 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:40.210 CC lib/blob/blob_bs_dev.o 00:03:40.210 CC lib/init/rpc.o 00:03:40.210 CC lib/accel/accel_rpc.o 00:03:40.210 CC lib/nvme/nvme_tcp.o 00:03:40.210 CC lib/nvme/nvme_opal.o 00:03:40.210 LIB libspdk_init.a 00:03:40.468 CC lib/nvme/nvme_io_msg.o 00:03:40.468 CC lib/accel/accel_sw.o 00:03:40.468 SO libspdk_init.so.5.0 00:03:40.468 SYMLINK libspdk_init.so 00:03:40.468 CC lib/nvme/nvme_poll_group.o 00:03:40.468 CC lib/virtio/virtio.o 00:03:40.468 CC lib/virtio/virtio_vhost_user.o 00:03:40.725 LIB libspdk_accel.a 00:03:40.725 CC lib/virtio/virtio_vfio_user.o 00:03:40.725 SO libspdk_accel.so.15.1 00:03:40.725 SYMLINK libspdk_accel.so 00:03:40.725 CC lib/nvme/nvme_zns.o 00:03:40.983 CC lib/virtio/virtio_pci.o 00:03:40.983 CC lib/nvme/nvme_stubs.o 00:03:40.983 CC lib/event/app.o 00:03:40.983 CC lib/bdev/bdev.o 00:03:40.983 CC lib/bdev/bdev_rpc.o 00:03:40.983 CC lib/bdev/bdev_zone.o 00:03:40.983 CC lib/nvme/nvme_auth.o 00:03:41.241 LIB libspdk_virtio.a 00:03:41.241 SO libspdk_virtio.so.7.0 00:03:41.241 CC lib/bdev/part.o 00:03:41.241 SYMLINK libspdk_virtio.so 00:03:41.241 CC lib/nvme/nvme_cuse.o 00:03:41.241 CC 
lib/event/reactor.o 00:03:41.241 CC lib/nvme/nvme_rdma.o 00:03:41.499 CC lib/bdev/scsi_nvme.o 00:03:41.499 CC lib/event/log_rpc.o 00:03:41.499 CC lib/event/app_rpc.o 00:03:41.499 CC lib/event/scheduler_static.o 00:03:41.757 LIB libspdk_event.a 00:03:41.757 SO libspdk_event.so.14.0 00:03:42.016 SYMLINK libspdk_event.so 00:03:42.605 LIB libspdk_blob.a 00:03:42.605 LIB libspdk_nvme.a 00:03:42.605 SO libspdk_blob.so.11.0 00:03:42.605 SYMLINK libspdk_blob.so 00:03:42.605 SO libspdk_nvme.so.13.1 00:03:42.862 CC lib/lvol/lvol.o 00:03:42.862 CC lib/blobfs/blobfs.o 00:03:42.862 CC lib/blobfs/tree.o 00:03:43.119 SYMLINK libspdk_nvme.so 00:03:43.377 LIB libspdk_bdev.a 00:03:43.377 SO libspdk_bdev.so.15.1 00:03:43.636 SYMLINK libspdk_bdev.so 00:03:43.636 LIB libspdk_blobfs.a 00:03:43.636 SO libspdk_blobfs.so.10.0 00:03:43.636 LIB libspdk_lvol.a 00:03:43.894 CC lib/scsi/dev.o 00:03:43.894 SO libspdk_lvol.so.10.0 00:03:43.894 CC lib/ftl/ftl_core.o 00:03:43.894 CC lib/nvmf/ctrlr.o 00:03:43.894 CC lib/scsi/lun.o 00:03:43.894 CC lib/nbd/nbd.o 00:03:43.894 CC lib/ublk/ublk.o 00:03:43.894 CC lib/ftl/ftl_init.o 00:03:43.894 CC lib/scsi/port.o 00:03:43.894 SYMLINK libspdk_blobfs.so 00:03:43.894 CC lib/ftl/ftl_layout.o 00:03:43.894 SYMLINK libspdk_lvol.so 00:03:43.894 CC lib/nbd/nbd_rpc.o 00:03:43.894 CC lib/scsi/scsi.o 00:03:43.894 CC lib/scsi/scsi_bdev.o 00:03:44.152 CC lib/scsi/scsi_pr.o 00:03:44.152 CC lib/scsi/scsi_rpc.o 00:03:44.152 CC lib/nvmf/ctrlr_discovery.o 00:03:44.152 CC lib/ublk/ublk_rpc.o 00:03:44.152 CC lib/nvmf/ctrlr_bdev.o 00:03:44.152 LIB libspdk_nbd.a 00:03:44.152 CC lib/ftl/ftl_debug.o 00:03:44.152 CC lib/scsi/task.o 00:03:44.152 SO libspdk_nbd.so.7.0 00:03:44.152 SYMLINK libspdk_nbd.so 00:03:44.410 CC lib/nvmf/subsystem.o 00:03:44.411 CC lib/ftl/ftl_io.o 00:03:44.411 CC lib/ftl/ftl_sb.o 00:03:44.411 CC lib/ftl/ftl_l2p.o 00:03:44.411 CC lib/nvmf/nvmf.o 00:03:44.411 LIB libspdk_ublk.a 00:03:44.411 SO libspdk_ublk.so.3.0 00:03:44.411 LIB libspdk_scsi.a 00:03:44.670 SYMLINK libspdk_ublk.so 00:03:44.670 CC lib/nvmf/nvmf_rpc.o 00:03:44.670 SO libspdk_scsi.so.9.0 00:03:44.670 CC lib/nvmf/transport.o 00:03:44.670 CC lib/nvmf/tcp.o 00:03:44.670 CC lib/ftl/ftl_l2p_flat.o 00:03:44.670 CC lib/nvmf/stubs.o 00:03:44.670 SYMLINK libspdk_scsi.so 00:03:44.670 CC lib/nvmf/mdns_server.o 00:03:44.937 CC lib/ftl/ftl_nv_cache.o 00:03:44.937 CC lib/iscsi/conn.o 00:03:44.937 CC lib/nvmf/rdma.o 00:03:45.200 CC lib/nvmf/auth.o 00:03:45.200 CC lib/vhost/vhost.o 00:03:45.200 CC lib/vhost/vhost_rpc.o 00:03:45.457 CC lib/vhost/vhost_scsi.o 00:03:45.457 CC lib/iscsi/init_grp.o 00:03:45.457 CC lib/iscsi/iscsi.o 00:03:45.457 CC lib/iscsi/md5.o 00:03:45.716 CC lib/iscsi/param.o 00:03:45.716 CC lib/ftl/ftl_band.o 00:03:45.716 CC lib/ftl/ftl_band_ops.o 00:03:45.974 CC lib/ftl/ftl_writer.o 00:03:45.974 CC lib/iscsi/portal_grp.o 00:03:45.974 CC lib/iscsi/tgt_node.o 00:03:45.974 CC lib/iscsi/iscsi_subsystem.o 00:03:45.974 CC lib/iscsi/iscsi_rpc.o 00:03:45.974 CC lib/iscsi/task.o 00:03:46.232 CC lib/ftl/ftl_rq.o 00:03:46.232 CC lib/ftl/ftl_reloc.o 00:03:46.232 CC lib/vhost/vhost_blk.o 00:03:46.232 CC lib/vhost/rte_vhost_user.o 00:03:46.232 CC lib/ftl/ftl_l2p_cache.o 00:03:46.232 CC lib/ftl/ftl_p2l.o 00:03:46.491 CC lib/ftl/mngt/ftl_mngt.o 00:03:46.491 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:46.491 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:46.491 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:46.749 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:46.749 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:46.749 LIB libspdk_iscsi.a 00:03:46.749 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:03:46.749 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:46.749 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:46.749 SO libspdk_iscsi.so.8.0 00:03:47.007 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:47.007 LIB libspdk_nvmf.a 00:03:47.007 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:47.007 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:47.007 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:47.007 SYMLINK libspdk_iscsi.so 00:03:47.007 CC lib/ftl/utils/ftl_conf.o 00:03:47.007 CC lib/ftl/utils/ftl_md.o 00:03:47.265 CC lib/ftl/utils/ftl_mempool.o 00:03:47.265 SO libspdk_nvmf.so.18.1 00:03:47.265 CC lib/ftl/utils/ftl_bitmap.o 00:03:47.265 CC lib/ftl/utils/ftl_property.o 00:03:47.265 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:47.265 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:47.265 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:47.265 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:47.265 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:47.265 SYMLINK libspdk_nvmf.so 00:03:47.265 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:47.524 LIB libspdk_vhost.a 00:03:47.524 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:47.524 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:47.524 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:47.524 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:47.524 SO libspdk_vhost.so.8.0 00:03:47.524 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:47.524 CC lib/ftl/base/ftl_base_dev.o 00:03:47.524 CC lib/ftl/base/ftl_base_bdev.o 00:03:47.524 CC lib/ftl/ftl_trace.o 00:03:47.524 SYMLINK libspdk_vhost.so 00:03:47.782 LIB libspdk_ftl.a 00:03:48.040 SO libspdk_ftl.so.9.0 00:03:48.299 SYMLINK libspdk_ftl.so 00:03:48.558 CC module/env_dpdk/env_dpdk_rpc.o 00:03:48.817 CC module/keyring/file/keyring.o 00:03:48.817 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:48.817 CC module/accel/ioat/accel_ioat.o 00:03:48.817 CC module/accel/dsa/accel_dsa.o 00:03:48.817 CC module/scheduler/gscheduler/gscheduler.o 00:03:48.817 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:48.817 CC module/accel/error/accel_error.o 00:03:48.817 CC module/sock/posix/posix.o 00:03:48.817 CC module/blob/bdev/blob_bdev.o 00:03:48.817 LIB libspdk_env_dpdk_rpc.a 00:03:48.817 SO libspdk_env_dpdk_rpc.so.6.0 00:03:48.817 SYMLINK libspdk_env_dpdk_rpc.so 00:03:48.817 CC module/accel/dsa/accel_dsa_rpc.o 00:03:48.817 CC module/keyring/file/keyring_rpc.o 00:03:49.076 LIB libspdk_scheduler_dpdk_governor.a 00:03:49.076 LIB libspdk_scheduler_gscheduler.a 00:03:49.076 CC module/accel/error/accel_error_rpc.o 00:03:49.076 CC module/accel/ioat/accel_ioat_rpc.o 00:03:49.076 LIB libspdk_scheduler_dynamic.a 00:03:49.076 SO libspdk_scheduler_gscheduler.so.4.0 00:03:49.076 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:49.076 SO libspdk_scheduler_dynamic.so.4.0 00:03:49.076 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:49.076 SYMLINK libspdk_scheduler_gscheduler.so 00:03:49.076 SYMLINK libspdk_scheduler_dynamic.so 00:03:49.076 LIB libspdk_accel_dsa.a 00:03:49.076 LIB libspdk_blob_bdev.a 00:03:49.076 LIB libspdk_keyring_file.a 00:03:49.076 SO libspdk_accel_dsa.so.5.0 00:03:49.076 LIB libspdk_accel_ioat.a 00:03:49.076 LIB libspdk_accel_error.a 00:03:49.076 SO libspdk_blob_bdev.so.11.0 00:03:49.076 SO libspdk_keyring_file.so.1.0 00:03:49.076 SO libspdk_accel_ioat.so.6.0 00:03:49.076 SO libspdk_accel_error.so.2.0 00:03:49.076 SYMLINK libspdk_accel_dsa.so 00:03:49.076 SYMLINK libspdk_blob_bdev.so 00:03:49.076 SYMLINK libspdk_keyring_file.so 00:03:49.334 SYMLINK libspdk_accel_ioat.so 00:03:49.334 SYMLINK libspdk_accel_error.so 00:03:49.334 CC module/keyring/linux/keyring.o 00:03:49.334 CC 
module/keyring/linux/keyring_rpc.o 00:03:49.334 CC module/accel/iaa/accel_iaa.o 00:03:49.334 CC module/accel/iaa/accel_iaa_rpc.o 00:03:49.334 LIB libspdk_keyring_linux.a 00:03:49.334 SO libspdk_keyring_linux.so.1.0 00:03:49.334 CC module/bdev/delay/vbdev_delay.o 00:03:49.334 CC module/blobfs/bdev/blobfs_bdev.o 00:03:49.592 LIB libspdk_accel_iaa.a 00:03:49.592 CC module/bdev/error/vbdev_error.o 00:03:49.592 CC module/bdev/gpt/gpt.o 00:03:49.592 CC module/bdev/lvol/vbdev_lvol.o 00:03:49.592 SO libspdk_accel_iaa.so.3.0 00:03:49.592 SYMLINK libspdk_keyring_linux.so 00:03:49.592 LIB libspdk_sock_posix.a 00:03:49.592 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:49.592 SO libspdk_sock_posix.so.6.0 00:03:49.592 CC module/bdev/malloc/bdev_malloc.o 00:03:49.592 CC module/bdev/null/bdev_null.o 00:03:49.592 SYMLINK libspdk_accel_iaa.so 00:03:49.592 CC module/bdev/gpt/vbdev_gpt.o 00:03:49.592 SYMLINK libspdk_sock_posix.so 00:03:49.592 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:49.851 CC module/bdev/error/vbdev_error_rpc.o 00:03:49.851 CC module/bdev/nvme/bdev_nvme.o 00:03:49.851 CC module/bdev/null/bdev_null_rpc.o 00:03:49.851 CC module/bdev/passthru/vbdev_passthru.o 00:03:49.851 LIB libspdk_bdev_delay.a 00:03:49.851 LIB libspdk_blobfs_bdev.a 00:03:49.851 LIB libspdk_bdev_gpt.a 00:03:49.851 SO libspdk_blobfs_bdev.so.6.0 00:03:49.851 SO libspdk_bdev_delay.so.6.0 00:03:49.851 CC module/bdev/raid/bdev_raid.o 00:03:49.851 SO libspdk_bdev_gpt.so.6.0 00:03:49.851 LIB libspdk_bdev_error.a 00:03:49.851 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:49.851 SO libspdk_bdev_error.so.6.0 00:03:49.851 SYMLINK libspdk_bdev_delay.so 00:03:49.851 SYMLINK libspdk_blobfs_bdev.so 00:03:49.851 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:49.851 SYMLINK libspdk_bdev_gpt.so 00:03:49.851 CC module/bdev/raid/bdev_raid_rpc.o 00:03:50.110 LIB libspdk_bdev_null.a 00:03:50.110 SYMLINK libspdk_bdev_error.so 00:03:50.110 SO libspdk_bdev_null.so.6.0 00:03:50.110 LIB libspdk_bdev_malloc.a 00:03:50.110 SYMLINK libspdk_bdev_null.so 00:03:50.110 SO libspdk_bdev_malloc.so.6.0 00:03:50.110 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:50.110 CC module/bdev/split/vbdev_split.o 00:03:50.110 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:50.110 CC module/bdev/aio/bdev_aio.o 00:03:50.110 CC module/bdev/split/vbdev_split_rpc.o 00:03:50.110 SYMLINK libspdk_bdev_malloc.so 00:03:50.110 CC module/bdev/raid/bdev_raid_sb.o 00:03:50.368 CC module/bdev/ftl/bdev_ftl.o 00:03:50.368 LIB libspdk_bdev_lvol.a 00:03:50.368 LIB libspdk_bdev_passthru.a 00:03:50.368 SO libspdk_bdev_lvol.so.6.0 00:03:50.368 SO libspdk_bdev_passthru.so.6.0 00:03:50.368 CC module/bdev/aio/bdev_aio_rpc.o 00:03:50.368 LIB libspdk_bdev_split.a 00:03:50.368 SYMLINK libspdk_bdev_lvol.so 00:03:50.368 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:50.368 SYMLINK libspdk_bdev_passthru.so 00:03:50.368 SO libspdk_bdev_split.so.6.0 00:03:50.368 SYMLINK libspdk_bdev_split.so 00:03:50.627 CC module/bdev/raid/raid0.o 00:03:50.627 CC module/bdev/raid/raid1.o 00:03:50.627 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:50.627 LIB libspdk_bdev_aio.a 00:03:50.627 CC module/bdev/raid/concat.o 00:03:50.627 CC module/bdev/iscsi/bdev_iscsi.o 00:03:50.627 SO libspdk_bdev_aio.so.6.0 00:03:50.627 LIB libspdk_bdev_ftl.a 00:03:50.627 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:50.627 SYMLINK libspdk_bdev_aio.so 00:03:50.627 SO libspdk_bdev_ftl.so.6.0 00:03:50.627 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:50.627 LIB libspdk_bdev_zone_block.a 00:03:50.627 SO libspdk_bdev_zone_block.so.6.0 
00:03:50.627 SYMLINK libspdk_bdev_ftl.so 00:03:50.885 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:50.885 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:50.885 SYMLINK libspdk_bdev_zone_block.so 00:03:50.885 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:50.885 CC module/bdev/nvme/nvme_rpc.o 00:03:50.885 CC module/bdev/nvme/bdev_mdns_client.o 00:03:50.885 LIB libspdk_bdev_raid.a 00:03:50.885 SO libspdk_bdev_raid.so.6.0 00:03:50.885 CC module/bdev/nvme/vbdev_opal.o 00:03:50.885 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:50.885 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:50.885 LIB libspdk_bdev_iscsi.a 00:03:51.144 SYMLINK libspdk_bdev_raid.so 00:03:51.144 SO libspdk_bdev_iscsi.so.6.0 00:03:51.144 SYMLINK libspdk_bdev_iscsi.so 00:03:51.144 LIB libspdk_bdev_virtio.a 00:03:51.144 SO libspdk_bdev_virtio.so.6.0 00:03:51.401 SYMLINK libspdk_bdev_virtio.so 00:03:51.967 LIB libspdk_bdev_nvme.a 00:03:52.225 SO libspdk_bdev_nvme.so.7.0 00:03:52.225 SYMLINK libspdk_bdev_nvme.so 00:03:52.791 CC module/event/subsystems/scheduler/scheduler.o 00:03:52.791 CC module/event/subsystems/iobuf/iobuf.o 00:03:52.791 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:52.791 CC module/event/subsystems/vmd/vmd.o 00:03:52.791 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:52.791 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:52.791 CC module/event/subsystems/sock/sock.o 00:03:52.791 CC module/event/subsystems/keyring/keyring.o 00:03:52.791 LIB libspdk_event_sock.a 00:03:52.791 LIB libspdk_event_vhost_blk.a 00:03:52.791 LIB libspdk_event_keyring.a 00:03:52.791 LIB libspdk_event_scheduler.a 00:03:52.791 LIB libspdk_event_vmd.a 00:03:52.791 LIB libspdk_event_iobuf.a 00:03:52.791 SO libspdk_event_sock.so.5.0 00:03:52.791 SO libspdk_event_vhost_blk.so.3.0 00:03:52.791 SO libspdk_event_keyring.so.1.0 00:03:52.791 SO libspdk_event_scheduler.so.4.0 00:03:52.791 SO libspdk_event_vmd.so.6.0 00:03:52.791 SO libspdk_event_iobuf.so.3.0 00:03:53.050 SYMLINK libspdk_event_sock.so 00:03:53.050 SYMLINK libspdk_event_vhost_blk.so 00:03:53.050 SYMLINK libspdk_event_keyring.so 00:03:53.050 SYMLINK libspdk_event_scheduler.so 00:03:53.050 SYMLINK libspdk_event_vmd.so 00:03:53.050 SYMLINK libspdk_event_iobuf.so 00:03:53.309 CC module/event/subsystems/accel/accel.o 00:03:53.309 LIB libspdk_event_accel.a 00:03:53.568 SO libspdk_event_accel.so.6.0 00:03:53.568 SYMLINK libspdk_event_accel.so 00:03:53.826 CC module/event/subsystems/bdev/bdev.o 00:03:54.084 LIB libspdk_event_bdev.a 00:03:54.084 SO libspdk_event_bdev.so.6.0 00:03:54.084 SYMLINK libspdk_event_bdev.so 00:03:54.343 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:54.343 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:54.343 CC module/event/subsystems/ublk/ublk.o 00:03:54.343 CC module/event/subsystems/scsi/scsi.o 00:03:54.343 CC module/event/subsystems/nbd/nbd.o 00:03:54.601 LIB libspdk_event_ublk.a 00:03:54.601 LIB libspdk_event_nbd.a 00:03:54.601 LIB libspdk_event_scsi.a 00:03:54.602 SO libspdk_event_ublk.so.3.0 00:03:54.602 SO libspdk_event_nbd.so.6.0 00:03:54.602 SO libspdk_event_scsi.so.6.0 00:03:54.602 SYMLINK libspdk_event_ublk.so 00:03:54.602 SYMLINK libspdk_event_nbd.so 00:03:54.602 LIB libspdk_event_nvmf.a 00:03:54.602 SYMLINK libspdk_event_scsi.so 00:03:54.602 SO libspdk_event_nvmf.so.6.0 00:03:54.861 SYMLINK libspdk_event_nvmf.so 00:03:54.861 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:54.861 CC module/event/subsystems/iscsi/iscsi.o 00:03:55.119 LIB libspdk_event_vhost_scsi.a 00:03:55.119 LIB libspdk_event_iscsi.a 00:03:55.119 SO 
libspdk_event_vhost_scsi.so.3.0 00:03:55.119 SO libspdk_event_iscsi.so.6.0 00:03:55.119 SYMLINK libspdk_event_vhost_scsi.so 00:03:55.119 SYMLINK libspdk_event_iscsi.so 00:03:55.377 SO libspdk.so.6.0 00:03:55.377 SYMLINK libspdk.so 00:03:55.635 CC app/trace_record/trace_record.o 00:03:55.635 CXX app/trace/trace.o 00:03:55.635 CC app/spdk_lspci/spdk_lspci.o 00:03:55.635 CC app/spdk_nvme_perf/perf.o 00:03:55.635 CC app/spdk_nvme_identify/identify.o 00:03:55.635 CC app/nvmf_tgt/nvmf_main.o 00:03:55.635 CC app/iscsi_tgt/iscsi_tgt.o 00:03:55.635 CC app/spdk_tgt/spdk_tgt.o 00:03:55.635 CC examples/util/zipf/zipf.o 00:03:55.635 CC test/thread/poller_perf/poller_perf.o 00:03:55.894 LINK spdk_lspci 00:03:55.894 LINK nvmf_tgt 00:03:55.894 LINK zipf 00:03:55.894 LINK spdk_trace_record 00:03:55.894 LINK poller_perf 00:03:55.894 LINK iscsi_tgt 00:03:55.894 LINK spdk_tgt 00:03:56.152 LINK spdk_trace 00:03:56.152 CC test/dma/test_dma/test_dma.o 00:03:56.152 TEST_HEADER include/spdk/accel.h 00:03:56.152 TEST_HEADER include/spdk/accel_module.h 00:03:56.152 TEST_HEADER include/spdk/assert.h 00:03:56.152 TEST_HEADER include/spdk/barrier.h 00:03:56.152 TEST_HEADER include/spdk/base64.h 00:03:56.152 TEST_HEADER include/spdk/bdev.h 00:03:56.152 TEST_HEADER include/spdk/bdev_module.h 00:03:56.152 TEST_HEADER include/spdk/bdev_zone.h 00:03:56.152 TEST_HEADER include/spdk/bit_array.h 00:03:56.152 TEST_HEADER include/spdk/bit_pool.h 00:03:56.152 TEST_HEADER include/spdk/blob_bdev.h 00:03:56.152 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:56.152 TEST_HEADER include/spdk/blobfs.h 00:03:56.152 TEST_HEADER include/spdk/blob.h 00:03:56.152 TEST_HEADER include/spdk/conf.h 00:03:56.152 TEST_HEADER include/spdk/config.h 00:03:56.152 TEST_HEADER include/spdk/cpuset.h 00:03:56.152 TEST_HEADER include/spdk/crc16.h 00:03:56.152 TEST_HEADER include/spdk/crc32.h 00:03:56.152 TEST_HEADER include/spdk/crc64.h 00:03:56.152 TEST_HEADER include/spdk/dif.h 00:03:56.152 TEST_HEADER include/spdk/dma.h 00:03:56.152 TEST_HEADER include/spdk/endian.h 00:03:56.152 TEST_HEADER include/spdk/env_dpdk.h 00:03:56.152 TEST_HEADER include/spdk/env.h 00:03:56.152 TEST_HEADER include/spdk/event.h 00:03:56.152 TEST_HEADER include/spdk/fd_group.h 00:03:56.152 TEST_HEADER include/spdk/fd.h 00:03:56.152 TEST_HEADER include/spdk/file.h 00:03:56.152 TEST_HEADER include/spdk/ftl.h 00:03:56.152 TEST_HEADER include/spdk/gpt_spec.h 00:03:56.152 CC examples/ioat/perf/perf.o 00:03:56.152 CC app/spdk_nvme_discover/discovery_aer.o 00:03:56.152 TEST_HEADER include/spdk/hexlify.h 00:03:56.152 TEST_HEADER include/spdk/histogram_data.h 00:03:56.152 TEST_HEADER include/spdk/idxd.h 00:03:56.152 TEST_HEADER include/spdk/idxd_spec.h 00:03:56.152 TEST_HEADER include/spdk/init.h 00:03:56.152 TEST_HEADER include/spdk/ioat.h 00:03:56.411 TEST_HEADER include/spdk/ioat_spec.h 00:03:56.411 TEST_HEADER include/spdk/iscsi_spec.h 00:03:56.411 TEST_HEADER include/spdk/json.h 00:03:56.411 TEST_HEADER include/spdk/jsonrpc.h 00:03:56.411 TEST_HEADER include/spdk/keyring.h 00:03:56.411 TEST_HEADER include/spdk/keyring_module.h 00:03:56.411 TEST_HEADER include/spdk/likely.h 00:03:56.411 TEST_HEADER include/spdk/log.h 00:03:56.411 CC test/app/bdev_svc/bdev_svc.o 00:03:56.411 TEST_HEADER include/spdk/lvol.h 00:03:56.411 TEST_HEADER include/spdk/memory.h 00:03:56.411 TEST_HEADER include/spdk/mmio.h 00:03:56.411 TEST_HEADER include/spdk/nbd.h 00:03:56.411 TEST_HEADER include/spdk/notify.h 00:03:56.411 TEST_HEADER include/spdk/nvme.h 00:03:56.411 TEST_HEADER include/spdk/nvme_intel.h 
00:03:56.411 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:56.411 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:56.411 TEST_HEADER include/spdk/nvme_spec.h 00:03:56.411 TEST_HEADER include/spdk/nvme_zns.h 00:03:56.411 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:56.411 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:56.411 TEST_HEADER include/spdk/nvmf.h 00:03:56.411 TEST_HEADER include/spdk/nvmf_spec.h 00:03:56.411 TEST_HEADER include/spdk/nvmf_transport.h 00:03:56.411 TEST_HEADER include/spdk/opal.h 00:03:56.411 TEST_HEADER include/spdk/opal_spec.h 00:03:56.411 TEST_HEADER include/spdk/pci_ids.h 00:03:56.411 TEST_HEADER include/spdk/pipe.h 00:03:56.411 TEST_HEADER include/spdk/queue.h 00:03:56.411 TEST_HEADER include/spdk/reduce.h 00:03:56.411 TEST_HEADER include/spdk/rpc.h 00:03:56.411 TEST_HEADER include/spdk/scheduler.h 00:03:56.411 TEST_HEADER include/spdk/scsi.h 00:03:56.411 TEST_HEADER include/spdk/scsi_spec.h 00:03:56.411 TEST_HEADER include/spdk/sock.h 00:03:56.411 TEST_HEADER include/spdk/stdinc.h 00:03:56.411 TEST_HEADER include/spdk/string.h 00:03:56.411 TEST_HEADER include/spdk/thread.h 00:03:56.411 TEST_HEADER include/spdk/trace.h 00:03:56.411 TEST_HEADER include/spdk/trace_parser.h 00:03:56.411 TEST_HEADER include/spdk/tree.h 00:03:56.411 TEST_HEADER include/spdk/ublk.h 00:03:56.411 CC test/env/vtophys/vtophys.o 00:03:56.411 TEST_HEADER include/spdk/util.h 00:03:56.411 TEST_HEADER include/spdk/uuid.h 00:03:56.411 TEST_HEADER include/spdk/version.h 00:03:56.411 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:56.411 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:56.411 TEST_HEADER include/spdk/vhost.h 00:03:56.411 TEST_HEADER include/spdk/vmd.h 00:03:56.411 TEST_HEADER include/spdk/xor.h 00:03:56.411 TEST_HEADER include/spdk/zipf.h 00:03:56.411 CXX test/cpp_headers/accel.o 00:03:56.411 CC test/env/mem_callbacks/mem_callbacks.o 00:03:56.411 CC examples/vmd/lsvmd/lsvmd.o 00:03:56.411 LINK ioat_perf 00:03:56.411 LINK spdk_nvme_identify 00:03:56.411 LINK spdk_nvme_discover 00:03:56.411 LINK spdk_nvme_perf 00:03:56.411 LINK bdev_svc 00:03:56.670 LINK vtophys 00:03:56.670 LINK lsvmd 00:03:56.670 LINK test_dma 00:03:56.670 CXX test/cpp_headers/accel_module.o 00:03:56.670 CC examples/ioat/verify/verify.o 00:03:56.670 CC app/spdk_top/spdk_top.o 00:03:56.670 CC test/app/histogram_perf/histogram_perf.o 00:03:56.929 CXX test/cpp_headers/assert.o 00:03:56.929 CC examples/vmd/led/led.o 00:03:56.929 CC test/app/jsoncat/jsoncat.o 00:03:56.929 CC test/app/stub/stub.o 00:03:56.929 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:56.929 LINK histogram_perf 00:03:56.929 CC test/event/event_perf/event_perf.o 00:03:56.929 LINK verify 00:03:56.929 CXX test/cpp_headers/barrier.o 00:03:56.929 LINK led 00:03:56.929 LINK jsoncat 00:03:56.929 LINK mem_callbacks 00:03:56.929 LINK stub 00:03:57.187 LINK event_perf 00:03:57.187 CXX test/cpp_headers/base64.o 00:03:57.187 CC test/event/reactor/reactor.o 00:03:57.187 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:57.187 CC test/env/memory/memory_ut.o 00:03:57.187 LINK nvme_fuzz 00:03:57.187 CC test/env/pci/pci_ut.o 00:03:57.187 CC test/nvme/aer/aer.o 00:03:57.446 CXX test/cpp_headers/bdev.o 00:03:57.446 CC examples/idxd/perf/perf.o 00:03:57.446 CC test/rpc_client/rpc_client_test.o 00:03:57.446 LINK reactor 00:03:57.446 LINK env_dpdk_post_init 00:03:57.446 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:57.446 CXX test/cpp_headers/bdev_module.o 00:03:57.446 LINK rpc_client_test 00:03:57.703 LINK aer 00:03:57.703 CC test/event/reactor_perf/reactor_perf.o 
00:03:57.703 LINK spdk_top 00:03:57.703 LINK pci_ut 00:03:57.703 LINK idxd_perf 00:03:57.703 CC test/event/app_repeat/app_repeat.o 00:03:57.703 CXX test/cpp_headers/bdev_zone.o 00:03:57.703 LINK reactor_perf 00:03:57.703 CC test/event/scheduler/scheduler.o 00:03:57.964 LINK app_repeat 00:03:57.964 CC test/nvme/reset/reset.o 00:03:57.964 CC app/vhost/vhost.o 00:03:57.964 CXX test/cpp_headers/bit_array.o 00:03:57.964 CC test/nvme/e2edp/nvme_dp.o 00:03:57.964 CC test/nvme/sgl/sgl.o 00:03:57.964 LINK scheduler 00:03:58.244 CXX test/cpp_headers/bit_pool.o 00:03:58.244 LINK vhost 00:03:58.244 LINK reset 00:03:58.244 CC test/accel/dif/dif.o 00:03:58.244 CXX test/cpp_headers/blob_bdev.o 00:03:58.244 LINK sgl 00:03:58.244 CXX test/cpp_headers/blobfs_bdev.o 00:03:58.244 LINK nvme_dp 00:03:58.517 LINK memory_ut 00:03:58.517 CC app/spdk_dd/spdk_dd.o 00:03:58.517 CXX test/cpp_headers/blobfs.o 00:03:58.517 CC test/blobfs/mkfs/mkfs.o 00:03:58.517 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:58.517 CC test/nvme/overhead/overhead.o 00:03:58.517 CXX test/cpp_headers/blob.o 00:03:58.776 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:58.776 LINK dif 00:03:58.776 LINK mkfs 00:03:58.776 CC test/lvol/esnap/esnap.o 00:03:58.776 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:58.776 CXX test/cpp_headers/conf.o 00:03:58.776 LINK spdk_dd 00:03:58.776 LINK overhead 00:03:58.776 CXX test/cpp_headers/config.o 00:03:59.034 CXX test/cpp_headers/cpuset.o 00:03:59.034 CXX test/cpp_headers/crc16.o 00:03:59.034 LINK interrupt_tgt 00:03:59.292 LINK vhost_fuzz 00:03:59.292 CC test/nvme/err_injection/err_injection.o 00:03:59.292 LINK iscsi_fuzz 00:03:59.292 CXX test/cpp_headers/crc32.o 00:03:59.292 CXX test/cpp_headers/crc64.o 00:03:59.292 CC test/bdev/bdevio/bdevio.o 00:03:59.292 CXX test/cpp_headers/dif.o 00:03:59.292 CC app/fio/nvme/fio_plugin.o 00:03:59.292 LINK err_injection 00:03:59.550 CC examples/thread/thread/thread_ex.o 00:03:59.550 CXX test/cpp_headers/dma.o 00:03:59.550 CXX test/cpp_headers/endian.o 00:03:59.550 CC app/fio/bdev/fio_plugin.o 00:03:59.550 CC test/nvme/startup/startup.o 00:03:59.550 CXX test/cpp_headers/env_dpdk.o 00:03:59.809 CXX test/cpp_headers/env.o 00:03:59.809 LINK bdevio 00:03:59.809 LINK startup 00:03:59.809 LINK thread 00:03:59.809 CXX test/cpp_headers/event.o 00:03:59.809 CXX test/cpp_headers/fd_group.o 00:03:59.809 CC examples/sock/hello_world/hello_sock.o 00:04:00.067 LINK spdk_nvme 00:04:00.067 CC test/nvme/reserve/reserve.o 00:04:00.067 LINK spdk_bdev 00:04:00.067 CC test/nvme/connect_stress/connect_stress.o 00:04:00.067 CC test/nvme/simple_copy/simple_copy.o 00:04:00.067 CXX test/cpp_headers/fd.o 00:04:00.067 LINK hello_sock 00:04:00.067 CC examples/accel/perf/accel_perf.o 00:04:00.326 CXX test/cpp_headers/file.o 00:04:00.326 LINK reserve 00:04:00.326 LINK connect_stress 00:04:00.326 CC examples/blob/hello_world/hello_blob.o 00:04:00.326 LINK simple_copy 00:04:00.326 CXX test/cpp_headers/ftl.o 00:04:00.326 CC examples/blob/cli/blobcli.o 00:04:00.584 CXX test/cpp_headers/gpt_spec.o 00:04:00.584 CC test/nvme/boot_partition/boot_partition.o 00:04:00.584 LINK accel_perf 00:04:00.584 CC test/nvme/compliance/nvme_compliance.o 00:04:00.584 LINK hello_blob 00:04:00.842 CC examples/nvme/hello_world/hello_world.o 00:04:00.842 CC examples/nvme/reconnect/reconnect.o 00:04:00.842 CXX test/cpp_headers/hexlify.o 00:04:00.842 LINK boot_partition 00:04:00.842 LINK blobcli 00:04:00.842 CXX test/cpp_headers/histogram_data.o 00:04:01.100 LINK hello_world 00:04:01.100 LINK nvme_compliance 00:04:01.100 
CC examples/nvme/nvme_manage/nvme_manage.o 00:04:01.100 CC examples/nvme/arbitration/arbitration.o 00:04:01.100 CXX test/cpp_headers/idxd.o 00:04:01.357 LINK reconnect 00:04:01.357 CC test/nvme/fused_ordering/fused_ordering.o 00:04:01.358 CC examples/bdev/hello_world/hello_bdev.o 00:04:01.358 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:01.358 CC examples/nvme/hotplug/hotplug.o 00:04:01.358 CXX test/cpp_headers/idxd_spec.o 00:04:01.358 LINK fused_ordering 00:04:01.615 LINK arbitration 00:04:01.615 LINK doorbell_aers 00:04:01.615 CC test/nvme/fdp/fdp.o 00:04:01.615 CXX test/cpp_headers/init.o 00:04:01.615 LINK hello_bdev 00:04:01.615 LINK nvme_manage 00:04:01.615 LINK hotplug 00:04:01.615 CXX test/cpp_headers/ioat.o 00:04:01.873 CXX test/cpp_headers/ioat_spec.o 00:04:01.873 CC test/nvme/cuse/cuse.o 00:04:01.873 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:01.873 CXX test/cpp_headers/iscsi_spec.o 00:04:01.873 CC examples/nvme/abort/abort.o 00:04:02.131 CXX test/cpp_headers/json.o 00:04:02.131 LINK fdp 00:04:02.131 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.131 CXX test/cpp_headers/jsonrpc.o 00:04:02.131 LINK cmb_copy 00:04:02.131 CXX test/cpp_headers/keyring.o 00:04:02.389 CXX test/cpp_headers/keyring_module.o 00:04:02.389 CXX test/cpp_headers/likely.o 00:04:02.389 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:02.389 CXX test/cpp_headers/log.o 00:04:02.389 CXX test/cpp_headers/lvol.o 00:04:02.389 LINK abort 00:04:02.647 CXX test/cpp_headers/memory.o 00:04:02.647 CXX test/cpp_headers/mmio.o 00:04:02.647 CXX test/cpp_headers/nbd.o 00:04:02.647 CXX test/cpp_headers/notify.o 00:04:02.648 CXX test/cpp_headers/nvme.o 00:04:02.648 LINK pmr_persistence 00:04:02.648 CXX test/cpp_headers/nvme_intel.o 00:04:02.906 CXX test/cpp_headers/nvme_ocssd.o 00:04:02.906 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:02.906 CXX test/cpp_headers/nvme_spec.o 00:04:02.906 CXX test/cpp_headers/nvme_zns.o 00:04:02.906 LINK bdevperf 00:04:02.906 CXX test/cpp_headers/nvmf_cmd.o 00:04:02.906 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:02.906 CXX test/cpp_headers/nvmf.o 00:04:03.165 CXX test/cpp_headers/nvmf_spec.o 00:04:03.165 CXX test/cpp_headers/nvmf_transport.o 00:04:03.165 CXX test/cpp_headers/opal.o 00:04:03.165 CXX test/cpp_headers/opal_spec.o 00:04:03.165 CXX test/cpp_headers/pci_ids.o 00:04:03.165 CXX test/cpp_headers/pipe.o 00:04:03.423 CXX test/cpp_headers/queue.o 00:04:03.423 CXX test/cpp_headers/reduce.o 00:04:03.423 CXX test/cpp_headers/scheduler.o 00:04:03.423 CXX test/cpp_headers/rpc.o 00:04:03.423 CXX test/cpp_headers/scsi.o 00:04:03.423 CXX test/cpp_headers/scsi_spec.o 00:04:03.423 LINK cuse 00:04:03.423 CXX test/cpp_headers/sock.o 00:04:03.682 CC examples/nvmf/nvmf/nvmf.o 00:04:03.682 CXX test/cpp_headers/stdinc.o 00:04:03.682 CXX test/cpp_headers/string.o 00:04:03.682 CXX test/cpp_headers/thread.o 00:04:03.682 CXX test/cpp_headers/trace.o 00:04:03.682 CXX test/cpp_headers/trace_parser.o 00:04:03.682 CXX test/cpp_headers/tree.o 00:04:03.682 CXX test/cpp_headers/ublk.o 00:04:03.682 CXX test/cpp_headers/util.o 00:04:03.682 CXX test/cpp_headers/uuid.o 00:04:03.682 CXX test/cpp_headers/version.o 00:04:03.682 CXX test/cpp_headers/vfio_user_pci.o 00:04:03.682 CXX test/cpp_headers/vfio_user_spec.o 00:04:03.940 CXX test/cpp_headers/vhost.o 00:04:03.940 CXX test/cpp_headers/vmd.o 00:04:03.940 LINK nvmf 00:04:03.940 CXX test/cpp_headers/xor.o 00:04:03.940 CXX test/cpp_headers/zipf.o 00:04:04.213 LINK esnap 00:04:05.146 00:04:05.146 real 0m55.889s 00:04:05.146 user 5m17.424s 00:04:05.146 sys 1m11.787s 
00:04:05.146 06:50:13 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:05.146 06:50:13 make -- common/autotest_common.sh@10 -- $ set +x 00:04:05.146 ************************************ 00:04:05.146 END TEST make 00:04:05.146 ************************************ 00:04:05.146 06:50:13 -- common/autotest_common.sh@1142 -- $ return 0 00:04:05.146 06:50:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:05.147 06:50:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:05.147 06:50:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:05.147 06:50:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.147 06:50:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:05.147 06:50:13 -- pm/common@44 -- $ pid=5932 00:04:05.147 06:50:13 -- pm/common@50 -- $ kill -TERM 5932 00:04:05.147 06:50:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.147 06:50:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:05.147 06:50:13 -- pm/common@44 -- $ pid=5934 00:04:05.147 06:50:13 -- pm/common@50 -- $ kill -TERM 5934 00:04:05.147 06:50:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:05.147 06:50:13 -- nvmf/common.sh@7 -- # uname -s 00:04:05.147 06:50:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.147 06:50:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.147 06:50:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.147 06:50:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.147 06:50:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.147 06:50:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.147 06:50:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.147 06:50:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.147 06:50:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.147 06:50:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.147 06:50:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:04:05.147 06:50:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:04:05.147 06:50:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.147 06:50:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.147 06:50:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:05.147 06:50:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.147 06:50:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:05.147 06:50:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.147 06:50:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.147 06:50:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.147 06:50:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.147 06:50:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.147 06:50:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.147 06:50:13 -- paths/export.sh@5 -- # export PATH 00:04:05.147 06:50:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.147 06:50:13 -- nvmf/common.sh@47 -- # : 0 00:04:05.147 06:50:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:05.147 06:50:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:05.147 06:50:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.147 06:50:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.147 06:50:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.147 06:50:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:05.147 06:50:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:05.147 06:50:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:05.147 06:50:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:05.404 06:50:13 -- spdk/autotest.sh@32 -- # uname -s 00:04:05.404 06:50:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:05.404 06:50:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:05.404 06:50:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.404 06:50:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:05.404 06:50:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:05.404 06:50:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:05.404 06:50:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:05.404 06:50:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:05.404 06:50:13 -- spdk/autotest.sh@48 -- # udevadm_pid=67111 00:04:05.404 06:50:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:05.404 06:50:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:05.404 06:50:13 -- pm/common@17 -- # local monitor 00:04:05.404 06:50:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.404 06:50:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.404 06:50:13 -- pm/common@25 -- # sleep 1 00:04:05.404 06:50:13 -- pm/common@21 -- # date +%s 00:04:05.404 06:50:13 -- pm/common@21 -- # date +%s 00:04:05.404 06:50:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720853413 00:04:05.404 06:50:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720853413 00:04:05.404 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720853413_collect-vmstat.pm.log 00:04:05.404 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720853413_collect-cpu-load.pm.log 00:04:06.338 06:50:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:06.339 06:50:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:06.339 06:50:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:06.339 06:50:14 -- common/autotest_common.sh@10 -- # set +x 00:04:06.339 06:50:14 -- spdk/autotest.sh@59 -- # create_test_list 00:04:06.339 06:50:14 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:06.339 06:50:14 -- common/autotest_common.sh@10 -- # set +x 00:04:06.339 06:50:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:06.339 06:50:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:06.339 06:50:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:06.339 06:50:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:06.339 06:50:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:06.339 06:50:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:06.339 06:50:14 -- common/autotest_common.sh@1455 -- # uname 00:04:06.339 06:50:14 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:06.339 06:50:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:06.339 06:50:14 -- common/autotest_common.sh@1475 -- # uname 00:04:06.339 06:50:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:06.339 06:50:14 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:06.339 06:50:14 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:06.339 06:50:14 -- spdk/autotest.sh@72 -- # hash lcov 00:04:06.339 06:50:14 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:06.339 06:50:14 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:06.339 --rc lcov_branch_coverage=1 00:04:06.339 --rc lcov_function_coverage=1 00:04:06.339 --rc genhtml_branch_coverage=1 00:04:06.339 --rc genhtml_function_coverage=1 00:04:06.339 --rc genhtml_legend=1 00:04:06.339 --rc geninfo_all_blocks=1 00:04:06.339 ' 00:04:06.339 06:50:14 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:06.339 --rc lcov_branch_coverage=1 00:04:06.339 --rc lcov_function_coverage=1 00:04:06.339 --rc genhtml_branch_coverage=1 00:04:06.339 --rc genhtml_function_coverage=1 00:04:06.339 --rc genhtml_legend=1 00:04:06.339 --rc geninfo_all_blocks=1 00:04:06.339 ' 00:04:06.339 06:50:14 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:06.339 --rc lcov_branch_coverage=1 00:04:06.339 --rc lcov_function_coverage=1 00:04:06.339 --rc genhtml_branch_coverage=1 00:04:06.339 --rc genhtml_function_coverage=1 00:04:06.339 --rc genhtml_legend=1 00:04:06.339 --rc geninfo_all_blocks=1 00:04:06.339 --no-external' 00:04:06.339 06:50:14 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:06.339 --rc lcov_branch_coverage=1 00:04:06.339 --rc lcov_function_coverage=1 00:04:06.339 --rc genhtml_branch_coverage=1 00:04:06.339 --rc genhtml_function_coverage=1 00:04:06.339 --rc genhtml_legend=1 00:04:06.339 --rc geninfo_all_blocks=1 00:04:06.339 --no-external' 00:04:06.339 06:50:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:06.597 lcov: LCOV version 
1.14 00:04:06.597 06:50:14 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:21.475 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:21.475 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:31.471 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:31.472 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 
00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:31.472 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:31.472 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:31.473 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:31.473 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:31.473 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:34.782 06:50:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:34.782 06:50:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.782 06:50:42 -- common/autotest_common.sh@10 -- # set +x 00:04:34.782 06:50:42 -- spdk/autotest.sh@91 -- # rm -f 00:04:34.782 06:50:42 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.048 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:35.048 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:35.048 06:50:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:35.048 06:50:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:35.048 06:50:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:35.048 06:50:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:35.048 06:50:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.048 06:50:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:35.048 06:50:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:35.048 06:50:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.049 06:50:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.049 06:50:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.049 06:50:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:35.049 06:50:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:35.049 06:50:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:35.049 06:50:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.049 06:50:43 -- 
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.049 06:50:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:35.049 06:50:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:35.049 06:50:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:35.049 06:50:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.049 06:50:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.049 06:50:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:35.049 06:50:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:35.049 06:50:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:35.049 06:50:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.049 06:50:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:35.049 06:50:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.049 06:50:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:35.049 06:50:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:35.049 06:50:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:35.049 06:50:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:35.306 No valid GPT data, bailing 00:04:35.306 06:50:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.306 06:50:43 -- scripts/common.sh@391 -- # pt= 00:04:35.306 06:50:43 -- scripts/common.sh@392 -- # return 1 00:04:35.306 06:50:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:35.306 1+0 records in 00:04:35.306 1+0 records out 00:04:35.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00333397 s, 315 MB/s 00:04:35.306 06:50:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.306 06:50:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:35.306 06:50:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:35.306 06:50:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:35.306 06:50:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:35.306 No valid GPT data, bailing 00:04:35.306 06:50:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:35.306 06:50:43 -- scripts/common.sh@391 -- # pt= 00:04:35.306 06:50:43 -- scripts/common.sh@392 -- # return 1 00:04:35.306 06:50:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:35.306 1+0 records in 00:04:35.306 1+0 records out 00:04:35.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0058151 s, 180 MB/s 00:04:35.306 06:50:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.306 06:50:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:35.306 06:50:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:35.306 06:50:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:35.306 06:50:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:35.306 No valid GPT data, bailing 00:04:35.306 06:50:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:35.306 06:50:43 -- scripts/common.sh@391 -- # pt= 00:04:35.306 06:50:43 -- scripts/common.sh@392 -- # return 1 00:04:35.306 06:50:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:35.306 1+0 records in 00:04:35.306 1+0 records out 00:04:35.306 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.00371038 s, 283 MB/s 00:04:35.306 06:50:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.306 06:50:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:35.306 06:50:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:35.306 06:50:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:35.306 06:50:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:35.306 No valid GPT data, bailing 00:04:35.306 06:50:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:35.306 06:50:43 -- scripts/common.sh@391 -- # pt= 00:04:35.306 06:50:43 -- scripts/common.sh@392 -- # return 1 00:04:35.306 06:50:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:35.563 1+0 records in 00:04:35.563 1+0 records out 00:04:35.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045505 s, 230 MB/s 00:04:35.563 06:50:43 -- spdk/autotest.sh@118 -- # sync 00:04:35.563 06:50:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:35.563 06:50:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:35.563 06:50:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.461 06:50:45 -- spdk/autotest.sh@124 -- # uname -s 00:04:37.461 06:50:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:37.461 06:50:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.461 06:50:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.461 06:50:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.461 06:50:45 -- common/autotest_common.sh@10 -- # set +x 00:04:37.461 ************************************ 00:04:37.461 START TEST setup.sh 00:04:37.461 ************************************ 00:04:37.461 06:50:45 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.461 * Looking for test storage... 00:04:37.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.461 06:50:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:37.461 06:50:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:37.461 06:50:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.461 06:50:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.461 06:50:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.461 06:50:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.461 ************************************ 00:04:37.461 START TEST acl 00:04:37.461 ************************************ 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.461 * Looking for test storage... 
00:04:37.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.461 06:50:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:37.461 06:50:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.461 06:50:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:37.461 06:50:45 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:37.461 06:50:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:37.461 06:50:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:37.461 06:50:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:37.461 06:50:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.461 06:50:45 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.396 06:50:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:38.396 06:50:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:38.396 06:50:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.396 06:50:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:38.396 06:50:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.396 06:50:46 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:38.963 06:50:46 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:38.963 06:50:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:38.963 06:50:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.963 Hugepages 00:04:38.963 node hugesize free / total 00:04:38.963 06:50:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:38.963 06:50:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:38.963 06:50:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.963 00:04:38.963 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.963 06:50:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:38.963 06:50:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:38.963 06:50:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.963 06:50:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:38.963 06:50:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:38.963 06:50:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:38.963 06:50:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:39.222 06:50:47 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:39.222 06:50:47 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.222 06:50:47 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.222 06:50:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:39.222 ************************************ 00:04:39.222 START TEST denied 00:04:39.222 ************************************ 00:04:39.222 06:50:47 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:39.222 06:50:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:39.222 06:50:47 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:39.222 06:50:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:39.222 06:50:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.222 06:50:47 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.158 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:40.158 06:50:48 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:40.158 06:50:48 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:40.158 06:50:48 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:40.158 06:50:48 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:40.159 06:50:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:40.159 06:50:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:40.159 06:50:48 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:40.159 06:50:48 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:40.159 06:50:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.159 06:50:48 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.725 00:04:40.725 real 0m1.435s 00:04:40.725 user 0m0.558s 00:04:40.725 sys 0m0.811s 00:04:40.725 06:50:48 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.725 06:50:48 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:40.725 ************************************ 00:04:40.725 END TEST denied 00:04:40.725 ************************************ 00:04:40.725 06:50:48 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:40.725 06:50:48 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:40.725 06:50:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.725 06:50:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.725 06:50:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:40.725 ************************************ 00:04:40.725 START TEST allowed 00:04:40.725 ************************************ 00:04:40.725 06:50:48 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:40.725 06:50:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:40.725 06:50:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:40.725 06:50:48 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:40.725 06:50:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.725 06:50:48 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.660 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.660 06:50:49 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.227 00:04:42.227 real 0m1.501s 00:04:42.227 user 0m0.690s 00:04:42.227 sys 0m0.793s 00:04:42.227 06:50:50 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:42.227 06:50:50 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:42.227 ************************************ 00:04:42.227 END TEST allowed 00:04:42.227 ************************************ 00:04:42.227 06:50:50 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:42.227 00:04:42.227 real 0m4.777s 00:04:42.227 user 0m2.119s 00:04:42.227 sys 0m2.575s 00:04:42.227 06:50:50 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.227 06:50:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:42.227 ************************************ 00:04:42.227 END TEST acl 00:04:42.227 ************************************ 00:04:42.227 06:50:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:42.227 06:50:50 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:42.227 06:50:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.227 06:50:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.227 06:50:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.227 ************************************ 00:04:42.227 START TEST hugepages 00:04:42.227 ************************************ 00:04:42.227 06:50:50 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:42.486 * Looking for test storage... 00:04:42.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 4461176 kB' 'MemAvailable: 7399604 kB' 'Buffers: 2436 kB' 'Cached: 3139288 kB' 'SwapCached: 0 kB' 'Active: 476344 kB' 'Inactive: 2769008 kB' 'Active(anon): 114120 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 105536 kB' 'Mapped: 48624 kB' 'Shmem: 10492 kB' 'KReclaimable: 88272 kB' 'Slab: 168244 kB' 'SReclaimable: 88272 kB' 'SUnreclaim: 79972 kB' 'KernelStack: 6592 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 333888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.486 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.487 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.488 06:50:50 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:42.488 06:50:50 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:42.488 06:50:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.488 06:50:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.488 06:50:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.488 ************************************ 00:04:42.488 START TEST default_setup 00:04:42.488 ************************************ 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.488 06:50:50 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.065 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.344 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555020 kB' 'MemAvailable: 9493344 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493444 kB' 'Inactive: 2769024 kB' 'Active(anon): 131220 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122328 kB' 'Mapped: 48676 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167996 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79968 kB' 'KernelStack: 6560 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
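The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo with IFS=': ' and read -r var val _, hitting "continue" on every key that is not the one requested (Hugepagesize earlier, AnonHugePages here) and echoing the matching value once it reaches it. Below is a minimal stand-alone sketch of that parsing pattern, assuming only a stock /proc/meminfo; it is not the SPDK helper itself (the real get_meminfo also supports per-node meminfo and loads the file through mapfile, as the trace shows):

# Hedged sketch of the scan in the trace: split each meminfo line on ': ',
# skip non-matching keys, print the value of the requested key and stop.
get_meminfo_sketch() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # the 'continue' lines in the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# e.g. get_meminfo_sketch Hugepagesize -> 2048 on this runner, the value
# hugepages.sh stores as default_hugepages before default_setup runs.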
00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.344 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555456 kB' 'MemAvailable: 9493780 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493360 kB' 'Inactive: 2769024 kB' 'Active(anon): 131136 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122244 kB' 'Mapped: 48676 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167968 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79940 kB' 'KernelStack: 6528 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.345 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
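At this point verify_nr_hugepages is re-reading the same /proc/meminfo snapshot for AnonHugePages (anon=0 above), then HugePages_Surp and finally HugePages_Rsvd, before comparing the counters against the 1024 pages that default_setup requested through /proc/sys/vm/nr_hugepages. A hedged way to eyeball the same counters by hand is shown below; the field names and sysfs paths come from the trace, but the final comparison is illustrative rather than the script's exact arithmetic, and the node0/2048kB pool is an assumption (it is the only pool on this runner):

# Illustrative check only; field and sysfs names as seen in the trace above.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
want=$(cat /proc/sys/vm/nr_hugepages)   # global pool size, 1024 here
node0=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)

echo "global=$want node0=$node0 total=$total free=$free surp=$surp rsvd=$rsvd"
(( total - surp == want )) && echo "pool matches the requested size"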
00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.346 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555456 kB' 'MemAvailable: 9493780 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493444 kB' 'Inactive: 2769024 kB' 'Active(anon): 131220 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122316 kB' 'Mapped: 48548 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167968 kB' 'SReclaimable: 88028 kB' 
'SUnreclaim: 79940 kB' 'KernelStack: 6544 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.347 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.348 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:43.349 nr_hugepages=1024 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.349 resv_hugepages=0 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.349 surplus_hugepages=0 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.349 anon_hugepages=0 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555456 kB' 'MemAvailable: 9493780 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493344 kB' 'Inactive: 2769024 kB' 'Active(anon): 131120 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122260 kB' 'Mapped: 48548 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167960 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79932 kB' 'KernelStack: 6544 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.349 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.350 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555920 kB' 'MemUsed: 5686048 kB' 'SwapCached: 0 kB' 'Active: 493292 kB' 'Inactive: 2769024 kB' 'Active(anon): 131068 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 3141716 kB' 'Mapped: 48548 kB' 'AnonPages: 122220 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88028 kB' 'Slab: 167952 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 
06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.351 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
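Annotation: the long run of "# continue" iterations above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until it reaches HugePages_Surp and echoes its value (the trace ends with "# echo 0" / "# return 0"). Below is a minimal sketch of that lookup pattern, reconstructed from the xtrace rather than taken from the actual setup/common.sh source; the function and variable names are illustrative.

# Hypothetical reconstruction of the lookup the xtrace above steps through:
# read /proc/meminfo, split each line on ': ', and print the value of the
# requested key; every non-matching key is one "continue" iteration in the log.
get_meminfo_sketch() {
    local get=$1                  # key to look up, e.g. HugePages_Surp
    local line var val _
    local -a mem
    mapfile -t mem < /proc/meminfo
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"           # here: 0, matching the "# echo 0" above
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Surp   # would print 0 on this runner

End of annotation; the log continues below.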
00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.352 node0=1024 expecting 1024 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.352 00:04:43.352 real 0m0.992s 00:04:43.352 user 0m0.444s 00:04:43.352 sys 0m0.477s 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.352 06:50:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:43.352 ************************************ 00:04:43.352 END TEST default_setup 00:04:43.352 ************************************ 00:04:43.352 06:50:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:43.352 06:50:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:43.352 06:50:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.352 06:50:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.352 06:50:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.611 ************************************ 00:04:43.611 START TEST per_node_1G_alloc 00:04:43.611 ************************************ 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.611 06:50:51 
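Annotation: the tail of the default_setup trace above is the per-node assertion: the test prints "node0=1024 expecting 1024" and passes because the observed hugepage count matches the expected one ("[[ 1024 == 1024 ]]"). A hedged sketch of the shape of that check follows; it is not the actual setup/hugepages.sh code, and nodes_found is an illustrative stand-in for values gathered via per-node meminfo lookups like the scan traced earlier.

# Illustrative per-node comparison, mirroring "node0=1024 expecting 1024" above.
nodes_test=([0]=1024)       # expected pages per node, as configured by the test
nodes_found=([0]=1024)      # observed pages per node (example values)
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_found[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_found[node]} == "${nodes_test[node]}" ]] || exit 1
done

End of annotation; the log continues below.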
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.611 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.874 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:43.874 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc 
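Annotation: before the per_node_1G_alloc verification starts, the get_test_nr_hugepages trace above turns the 1048576 kB (1 GiB) request into a page count of 512, all assigned to node 0, and runs scripts/setup.sh with NRHUGE=512 HUGENODE=0. The snippet below reproduces that number under the assumption that the script's sizing is the straightforward division shown here; it is not the exact setup/hugepages.sh code.

# Reproduces the sizing arithmetic traced above: 1 GiB requested / 2 MiB hugepages = 512 pages.
requested_kb=1048576                                                # from: get_test_nr_hugepages 1048576 0
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this runner
echo "NRHUGE=$(( requested_kb / hugepagesize_kb ))"                 # prints NRHUGE=512

The meminfo dump that follows is consistent with this: HugePages_Total: 512 with Hugepagesize: 2048 kB gives Hugetlb: 1048576 kB, i.e. 512 x 2048 kB = 1 GiB. End of annotation; the log continues below.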
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7601024 kB' 'MemAvailable: 10539348 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493412 kB' 'Inactive: 2769024 kB' 'Active(anon): 131188 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122292 kB' 'Mapped: 48664 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167936 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79908 kB' 'KernelStack: 6516 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.874 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7601024 kB' 'MemAvailable: 10539348 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493280 kB' 'Inactive: 2769024 kB' 'Active(anon): 131056 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122160 kB' 'Mapped: 48548 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167956 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79928 kB' 'KernelStack: 6560 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.875 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.876 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7601024 kB' 'MemAvailable: 10539348 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493348 kB' 'Inactive: 2769024 kB' 'Active(anon): 131124 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122228 kB' 'Mapped: 48548 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167956 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79928 kB' 'KernelStack: 6544 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.877 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
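The records above show get_meminfo deciding where it will read from: with an empty node argument the per-node sysfs path does not exist, so it stays on /proc/meminfo; a per-node file would additionally carry a "Node N " prefix that is stripped before parsing (the mem=("${mem[@]#Node +([0-9]) }") step, which relies on extglob). A minimal standalone sketch of that source-selection logic follows; pick_meminfo_source is an illustrative name, not the helper from setup/common.sh.

#!/usr/bin/env bash
# Sketch: pick the meminfo source the way the trace above does -- the global
# /proc/meminfo when no node is given, otherwise the per-node sysfs file.
# pick_meminfo_source is illustrative, not taken from setup/common.sh.

pick_meminfo_source() {
    local node=$1                      # empty string means "whole system"
    local mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    echo "$mem_f"
}

# Per-node files prefix every line with "Node <N> ", e.g.
#   Node 0 HugePages_Total:   512
# so a caller strips that prefix before parsing, as the trace does with
#   mem=("${mem[@]#Node +([0-9]) }")   (requires: shopt -s extglob)

pick_meminfo_source ""    # -> /proc/meminfo (node/node/meminfo does not exist)
pick_meminfo_source 0     # -> /sys/devices/system/node/node0/meminfo, if present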
00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
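What the long run of continue records above amounts to is a single scan: each meminfo field is read with IFS=': ', compared against the requested key (HugePages_Rsvd at this point in the trace), skipped if it does not match, and echoed once it does. A compact sketch of that pattern, assuming a meminfo-style "Key: value" file; get_meminfo_value is an illustrative name, not the project's get_meminfo from setup/common.sh.

#!/usr/bin/env bash
# Sketch of the field-scan pattern the trace is executing: walk a meminfo-style
# file line by line, split on ": ", and print the value of the requested key.

get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # one 'continue' per non-matching key, as in the trace
        echo "$val"                        # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total above
        return 0
    done < "$mem_f"
    return 1                               # key not present
}

get_meminfo_value HugePages_Rsvd     # prints the reserved-hugepage count (0 in the run above)
get_meminfo_value HugePages_Total    # prints 512 in the run above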
00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.878 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 
06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:43.879 nr_hugepages=512 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:43.879 resv_hugepages=0 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.879 surplus_hugepages=0 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.879 anon_hugepages=0 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7601024 kB' 'MemAvailable: 10539348 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493340 kB' 'Inactive: 2769024 kB' 'Active(anon): 131116 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 
kB' 'Writeback: 0 kB' 'AnonPages: 122268 kB' 'Mapped: 48548 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167956 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79928 kB' 'KernelStack: 6560 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
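With surp, resv, and the requested count in hand, the test asserts that the kernel's HugePages_Total equals nr_hugepages plus surplus plus reserved (512 == 512 + 0 + 0 in this run). A small sketch of that accounting check, using the values echoed above; the variable names are illustrative rather than copied from setup/hugepages.sh.

#!/usr/bin/env bash
# Sketch of the accounting check performed after collecting the counters:
# the requested hugepage count must match what the kernel reports once
# surplus and reserved pages are taken into account.

nr_hugepages=512   # requested for this test (echoed as nr_hugepages=512 above)
surp=0             # HugePages_Surp from the scan above
resv=0             # HugePages_Rsvd from the scan above
total=512          # HugePages_Total reported by the kernel

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: ${total} == ${nr_hugepages} + ${surp} + ${resv}"
else
    echo "mismatch: kernel reports ${total}, expected $(( nr_hugepages + surp + resv ))" >&2
    exit 1
fi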
00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.879 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 
06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.880 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7601024 kB' 'MemUsed: 4640944 kB' 'SwapCached: 0 kB' 'Active: 493296 kB' 'Inactive: 2769024 kB' 'Active(anon): 131072 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 3141716 kB' 'Mapped: 48548 kB' 'AnonPages: 122228 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 88028 kB' 'Slab: 167948 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.881 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:43.881 06:50:51 
[xtrace condensed: setup/common.sh@32 walks the remaining node0 meminfo fields (Active(file) through HugePages_Free) with "continue" until it reaches the requested HugePages_Surp field]
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:44.140 node0=512 expecting 512
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:44.140
00:04:44.140 real 0m0.550s
00:04:44.140 user 0m0.313s
00:04:44.140 sys 0m0.272s
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:44.140 06:50:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:44.140 ************************************
00:04:44.140 END TEST per_node_1G_alloc
00:04:44.140 ************************************
00:04:44.140 06:50:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:44.140 06:50:52 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:44.140 06:50:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:44.140 06:50:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
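The `node0=512 expecting 512` line is the actual check performed by this block: the per-node hugepage counters are summed and compared against the expected per-node allocation (512 pages of the default 2048 kB size, i.e. 1 GiB per node, which would be consistent with the test name). As a rough stand-alone illustration of that kind of check (not the setup/hugepages.sh code itself: it reads the kernel's per-node sysfs counter rather than parsing the node's meminfo as the traced helper does, and the expected value of 512 is simply taken from the line above):

    #!/usr/bin/env bash
    # Illustrative sketch: print each NUMA node's 2048 kB hugepage count and
    # compare it with an expected per-node target, as the check above does.
    expected=512    # assumed target, mirroring "node0=512 expecting 512"

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*/node}
        nr_file="$node_dir/hugepages/hugepages-2048kB/nr_hugepages"
        [[ -e $nr_file ]] || continue          # node has no 2 MB hugepage pool
        nr=$(<"$nr_file")
        echo "node${node}=${nr} expecting ${expected}"
        [[ $nr -eq $expected ]] || echo "node${node}: unexpected count" >&2
    done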
00:04:44.140 06:50:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:44.140 ************************************
00:04:44.140 START TEST even_2G_alloc
00:04:44.140 ************************************
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:44.140 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:44.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:44.402 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.402 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
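Stripped to its effect, the setup trace above is simple arithmetic plus one script invocation: `get_test_nr_hugepages 2097152` requests 2 GiB expressed in kB, dividing by the 2048 kB hugepage size reported in the meminfo dumps that follow gives nr_hugepages=1024, and that count is handed to scripts/setup.sh as NRHUGE=1024 with HUGE_EVEN_ALLOC=yes so the pages are spread evenly across nodes. A minimal sketch of that calculation (not the hugepages.sh implementation; the variable names are invented for the example):

    #!/usr/bin/env bash
    # Sketch: derive a hugepage count from a requested size in kB, matching the
    # values in the trace above (2097152 kB / 2048 kB per page = 1024 pages).
    size_kb=2097152                                            # 2 GiB in kB
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # typically 2048
    nr_hugepages=$(( size_kb / hp_kb ))
    echo "NRHUGE=${nr_hugepages}"

    # The traced test then runs the SPDK setup script with that count and an
    # even per-node split requested:
    #   NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh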
-- setup/hugepages.sh@92 -- # local surp 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6561328 kB' 'MemAvailable: 9499652 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 494016 kB' 'Inactive: 2769024 kB' 'Active(anon): 131792 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48712 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167904 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79876 kB' 'KernelStack: 6544 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.402 06:50:52 setup.sh.hugepages.even_2G_alloc -- 
[xtrace condensed: setup/common.sh@32 skips each remaining /proc/meminfo field (MemAvailable through VmallocTotal) with "continue" while looking for AnonHugePages; a stand-alone sketch of this lookup follows below]
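The condensed loop above (and the identical loops later in this block) is nothing more than a field lookup over a meminfo snapshot: each line is split on ': ' into a field name and a value, and the value is printed once the requested field is reached. A simplified stand-alone equivalent is sketched here; it is not the actual setup/common.sh get_meminfo, which (as the trace shows) can also read a per-node /sys/devices/system/node/nodeN/meminfo file and strips the "Node N" prefix from those lines, and the function name below is invented:

    #!/usr/bin/env bash
    # Simplified sketch of the lookup being traced: walk /proc/meminfo line by
    # line, splitting on ': ', and print the value of the requested field.
    get_meminfo_field() {
        local get=$1
        local var val rest
        while IFS=': ' read -r var val rest; do
            if [[ $var == "$get" ]]; then
                echo "$val"      # e.g. 0 for AnonHugePages, 1024 for HugePages_Total
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field AnonHugePages
    get_meminfo_field HugePages_Total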
_ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6561416 kB' 'MemAvailable: 9499740 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493444 kB' 'Inactive: 
2769024 kB' 'Active(anon): 131220 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48712 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167892 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79864 kB' 'KernelStack: 6512 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.403 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
[xtrace condensed: the identical per-field scan repeats for the HugePages_Surp lookup, skipping the Active through HugePages_Total fields with "continue"]
-- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6561416 kB' 'MemAvailable: 9499740 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493444 kB' 'Inactive: 2769024 kB' 'Active(anon): 131220 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48712 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167892 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79864 kB' 'KernelStack: 6580 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.405 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.406 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.407 nr_hugepages=1024 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.407 resv_hugepages=0 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.407 surplus_hugepages=0 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.407 anon_hugepages=0 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6561416 kB' 'MemAvailable: 9499740 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493528 kB' 'Inactive: 2769024 kB' 'Active(anon): 131304 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122420 kB' 'Mapped: 48548 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167904 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79876 kB' 'KernelStack: 6588 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:44.407 06:50:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.407 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.407 06:50:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.668 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.669 06:50:52 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6561416 kB' 'MemUsed: 5680552 kB' 'SwapCached: 0 kB' 'Active: 493572 kB' 'Inactive: 2769024 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 3141716 kB' 'Mapped: 48548 kB' 'AnonPages: 122552 kB' 'Shmem: 10468 kB' 'KernelStack: 6572 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88028 kB' 'Slab: 167900 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.669 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.670 node0=1024 expecting 1024 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:44.670 00:04:44.670 real 0m0.497s 00:04:44.670 user 0m0.238s 00:04:44.670 sys 0m0.293s 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.670 06:50:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:44.670 ************************************ 00:04:44.670 END TEST even_2G_alloc 00:04:44.670 ************************************ 00:04:44.670 06:50:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:44.670 06:50:52 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:44.670 06:50:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.670 06:50:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.670 06:50:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:44.670 ************************************ 00:04:44.670 START TEST odd_alloc 00:04:44.670 ************************************ 00:04:44.670 06:50:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:44.670 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:44.670 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:44.670 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:44.670 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.670 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:44.670 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:44.670 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:44.670 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
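The odd_alloc setup traced above requests 2098176 kB of hugepage memory: with the 2048 kB Hugepagesize reported in the meminfo dumps below, that is 1024.5 pages, rounded up to an odd 1025 and assigned entirely to node 0, with HUGEMEM=2049 (MB) and HUGE_EVEN_ALLOC=yes exported before scripts/setup.sh runs. A minimal sketch of that arithmetic, assuming ceiling division (the exact SPDK expression is not shown in this log; the sketch merely reproduces the traced values):

#!/usr/bin/env bash
# Hypothetical sketch, not the SPDK source: reproduce the sizing visible in the
# trace (2098176 kB -> nr_hugepages=1025, HUGEMEM=2049), assuming a 2048 kB page
# size and round-up division.
default_hugepages=2048                                                    # kB per hugepage
size_kb=2098176                                                           # argument to get_test_nr_hugepages
nr_hugepages=$(((size_kb + default_hugepages - 1) / default_hugepages))   # ceiling of 1024.5 -> 1025
hugemem_mb=$((size_kb / 1024))                                            # 2049
echo "nr_hugepages=$nr_hugepages HUGEMEM=$hugemem_mb"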
00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.671 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.940 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.940 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:44.940 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6558024 kB' 'MemAvailable: 9496348 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493984 kB' 'Inactive: 2769024 kB' 'Active(anon): 131760 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122564 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167912 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79884 kB' 'KernelStack: 6560 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 
06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.940 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 
06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
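The anon=0 above comes from the get_meminfo AnonHugePages call traced just before it: the helper snapshots /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is given), strips any "Node N " prefix, splits each line on ': ' into a key and a value, and walks the keys until the requested one matches, echoing its numeric value; the long runs of "continue" in this log are exactly that walk. A minimal standalone sketch under those assumptions (hypothetical name, and a plain read loop instead of the traced mapfile/array form):

# Hypothetical helper, not the SPDK get_meminfo itself.
get_meminfo_value() {
    local get=$1 node=${2-} mem_f=/proc/meminfo line key val _
    # prefer the per-node meminfo file when a node index is supplied and present
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#"Node $node "}            # drop the "Node N " prefix on per-node files
        IFS=': ' read -r key val _ <<< "$line"
        if [[ $key == "$get" ]]; then
            echo "$val"                       # numeric part only; the "kB" unit falls into _
            return 0
        fi
    done < "$mem_f"
    echo 0                                    # requested key not present
}
# e.g. get_meminfo_value AnonHugePages   -> 0 on this host, matching the trace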
00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6557772 kB' 'MemAvailable: 9496096 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493484 kB' 'Inactive: 2769024 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122388 kB' 'Mapped: 48548 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167928 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79900 kB' 'KernelStack: 6588 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.941 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 
06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.942 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6558204 kB' 'MemAvailable: 9496528 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493492 kB' 'Inactive: 2769024 kB' 'Active(anon): 131268 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122348 kB' 'Mapped: 48548 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167928 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79900 kB' 'KernelStack: 6556 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.228 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the get_meminfo loop keeps reading /proc/meminfo fields with IFS=': ' read -r var val _, hitting 'continue' for every remaining key from Active(file) through CmaTotal because none of them is HugePages_Rsvd]
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: CmaFree, Unaccepted, HugePages_Total and HugePages_Free are skipped before HugePages_Rsvd finally matches]
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:45.229 nr_hugepages=1025
06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:45.229 resv_hugepages=0
06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:45.229 surplus_hugepages=0
06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:45.229 anon_hugepages=0
06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.229 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:45.230 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6558204 kB' 'MemAvailable: 9496528 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493540 kB' 'Inactive: 2769024 kB' 'Active(anon): 131316 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122396 kB' 'Mapped: 48548 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167912 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79884 kB' 'KernelStack: 6572 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
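That snapshot feeds the consistency checks recorded a few lines earlier, where the reported HugePages_Total has to equal the page count the test requested plus any surplus and reserved pages. As a minimal illustration of the same arithmetic, with the numbers hard-coded from the snapshot above (a worked example, not the test's own code):

#!/usr/bin/env bash
requested=1025   # odd page count the odd_alloc test asked for
total=1025       # HugePages_Total from the snapshot
surp=0           # HugePages_Surp
resv=0           # HugePages_Rsvd

if (( total == requested + surp + resv )); then
    echo "hugepage accounting is consistent: $total pages"
else
    echo "mismatch: total=$total, expected=$((requested + surp + resv))" >&2
    exit 1
fi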
00:04:45.230 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same IFS=': ' read loop walks every field of the snapshot above and skips each key until HugePages_Total matches]
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.231 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6558204 kB' 'MemUsed: 5683764 kB' 'SwapCached: 0 kB' 'Active: 493628 kB' 'Inactive: 2769024 kB' 'Active(anon): 131404 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 3141716 kB' 'Mapped: 48548 kB' 'AnonPages: 122500 kB' 'Shmem: 10468 kB' 'KernelStack: 6588 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88028 kB' 'Slab: 167908 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
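Because a node index was passed this time, the lookup above switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a 'Node 0 ' prefix; that prefix is what the extglob strip at common.sh@29 removes. A small sketch of the same per-node read (helper name invented for illustration; requires bash with extglob enabled):

#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern below

# Print one field from a NUMA node's meminfo, e.g. HugePages_Surp on node 0.
read_node_meminfo_field() {
    local node=$1 want=$2 line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }              # drop the "Node 0 " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

read_node_meminfo_field 0 HugePages_Surp    # prints 0 on this runner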
00:04:45.232 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks every field of the node0 snapshot above and skips each key until HugePages_Surp matches]
00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.233 node0=1025 expecting 1025 00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:45.233 00:04:45.233 real 0m0.538s 00:04:45.233 user 0m0.283s 00:04:45.233 sys 0m0.286s 00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.233 06:50:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:45.233 ************************************ 00:04:45.233 END TEST odd_alloc 00:04:45.233 ************************************ 00:04:45.233 06:50:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:45.233 06:50:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:45.233 06:50:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.233 06:50:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.233 06:50:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:45.233 ************************************ 00:04:45.233 START TEST custom_alloc 00:04:45.233 ************************************ 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.233 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.492 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.492 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.492 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7605916 kB' 'MemAvailable: 10544240 kB' 'Buffers: 2436 kB' 'Cached: 3139280 kB' 'SwapCached: 0 kB' 'Active: 493732 kB' 'Inactive: 2769024 kB' 'Active(anon): 131508 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769024 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122928 kB' 'Mapped: 48840 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167912 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79884 kB' 'KernelStack: 6596 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
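The custom_alloc pass requested 1048576 kB worth of hugepages and the snapshot above reports HugePages_Total: 512, which is simply the target size divided by the 2048 kB default hugepage size. A short sketch of that conversion (variable names are illustrative, not the script's):

#!/usr/bin/env bash
target_kb=1048576    # 1 GiB of hugepage memory, as requested by the test

# Default hugepage size, taken from the "Hugepagesize: 2048 kB" line.
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

nr_hugepages=$(( target_kb / hugepagesize_kb ))
echo "need $nr_hugepages pages of ${hugepagesize_kb} kB"    # 512 on this runner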
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.493 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.494 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.756 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7606144 kB' 'MemAvailable: 10544472 kB' 'Buffers: 2436 kB' 'Cached: 
3139284 kB' 'SwapCached: 0 kB' 'Active: 493472 kB' 'Inactive: 2769028 kB' 'Active(anon): 131248 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122404 kB' 'Mapped: 48596 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167900 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79872 kB' 'KernelStack: 6576 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.757 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
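(The long runs of [[ ... ]] / continue above are the harness scanning /proc/meminfo field by field for a single key, here HugePages_Surp. A condensed standalone equivalent of that lookup, assuming a hypothetical helper name and covering only the system-wide /proc/meminfo case; the traced get_meminfo additionally handles per-node /sys/devices/system/node/nodeN/meminfo files:)

  # meminfo_value KEY - print the numeric value of one /proc/meminfo field.
  meminfo_value() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"       # value only; a trailing unit such as "kB" lands in $_
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  meminfo_value HugePages_Surp      # 0 on the machine in this log
  meminfo_value AnonHugePages       # "AnonHugePages: 0 kB" -> prints 0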
00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7606144 kB' 'MemAvailable: 10544472 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 493408 kB' 'Inactive: 2769028 kB' 'Active(anon): 131184 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122332 kB' 'Mapped: 48596 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167896 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79868 kB' 'KernelStack: 6528 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.758 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.759 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:45.760 nr_hugepages=512 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:45.760 resv_hugepages=0 
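The lookup that just returned 0 for HugePages_Rsvd (and resolves HugePages_Total to 512 immediately below) follows the shape visible in the trace: setup/common.sh loads /proc/meminfo, or a node's own meminfo file when a node number is given, strips the "Node N" prefix, then scans key/value pairs with IFS=': ' until the requested field matches. A condensed sketch reconstructed from the xtrace above; it mirrors the traced commands but is not the literal source of setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the +([0-9]) pattern below

    # get_meminfo <field> [node]  -- sketch reconstructed from the trace
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node lookups read that node's meminfo file when it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix on node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                   # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total           # system-wide, as in the lookup below
    get_meminfo HugePages_Surp 0          # node 0, as in the per-node check further down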
00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.760 surplus_hugepages=0 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.760 anon_hugepages=0 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7606144 kB' 'MemAvailable: 10544472 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 493364 kB' 'Inactive: 2769028 kB' 'Active(anon): 131140 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122328 kB' 'Mapped: 48552 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167908 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79880 kB' 'KernelStack: 6544 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.760 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.761 
06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.761 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7606144 kB' 'MemUsed: 4635824 kB' 'SwapCached: 0 kB' 'Active: 493376 kB' 'Inactive: 2769028 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3141720 kB' 'Mapped: 48552 kB' 'AnonPages: 122332 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88028 kB' 'Slab: 167908 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.762 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.763 node0=512 expecting 512 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:45.763 00:04:45.763 real 0m0.539s 00:04:45.763 user 0m0.269s 00:04:45.763 sys 0m0.304s 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.763 06:50:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:45.763 ************************************ 00:04:45.763 END TEST custom_alloc 
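What the custom_alloc verification above boils down to is a small amount of arithmetic: the reserved and surplus counts both came back 0, so the requested 512 pages must all show up in nr_hugepages, and with a single NUMA node (get_nodes found no_nodes=1) node0 has to hold all 512 of them, hence the final "node0=512 expecting 512" and the [[ 512 == 512 ]] match. The node0 dump is internally consistent as well, with MemUsed equal to MemTotal minus MemFree. Re-deriving the numbers from the values printed above:

    # Values copied from the trace above.
    nr_hugepages=512 resv=0 surp=0
    (( 512 == nr_hugepages + surp + resv )) && echo "global count OK: 512"
    # node0 meminfo: MemUsed = MemTotal - MemFree
    (( mem_used = 12241968 - 7606144 ))
    echo "node0 MemUsed: ${mem_used} kB"  # 4635824 kB, matching the node0 dump
    # Single node, so the whole pool lands on node0.
    echo "node0=512 expecting 512"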
00:04:45.763 ************************************ 00:04:45.763 06:50:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:45.763 06:50:53 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:45.763 06:50:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.763 06:50:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.763 06:50:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:45.763 ************************************ 00:04:45.763 START TEST no_shrink_alloc 00:04:45.763 ************************************ 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.763 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.764 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:45.764 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:45.764 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:45.764 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:45.764 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:45.764 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.764 06:50:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.285 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.285 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:46.285 
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555336 kB' 'MemAvailable: 9493664 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 493724 kB' 'Inactive: 2769028 kB' 'Active(anon): 131500 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122416 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167908 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79880 kB' 'KernelStack: 6560 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
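Two details are worth pulling out of the no_shrink_alloc prologue above. First, the requested size of 2097152 is turned into a page count consistent with dividing by the 2048 kB hugepage size: 2097152 / 2048 = 1024, which is why nr_hugepages becomes 1024 and the later dump reports HugePages_Total: 1024 with Hugetlb: 2097152 kB. Second, the "always [madvise] never" string matched against *[never]* at hugepages.sh@96 is the kernel's transparent hugepage mode (most likely read from /sys/kernel/mm/transparent_hugepage/enabled, though the trace does not show the read itself); only when THP is not forced to "never" does verify_nr_hugepages go on to fetch AnonHugePages, which is the scan in progress above. Roughly, and with the sysfs path as an assumption:

    # Re-derive the hugepage count seen in the trace (values copied from above).
    size=2097152 hugepagesize_kb=2048
    echo "nr_hugepages=$(( size / hugepagesize_kb ))"   # prints 1024
    # THP gate before counting AnonHugePages; the path is an assumption, the
    # trace only shows the resulting string "always [madvise] never".
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        echo "THP mode '$thp' -> AnonHugePages is included in the check"
    fi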
00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.285 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- read/compare loop: each remaining /proc/meminfo key (MemFree through HardwareCorrupted) is tested against AnonHugePages and skipped with continue
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
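The trace above is the get_meminfo helper in setup/common.sh scanning /proc/meminfo key by key until it reaches AnonHugePages (0 kB on this VM). A minimal bash sketch of that lookup, reconstructed from the traced commands at common.sh@16-@33 rather than copied from the SPDK source, and assuming a missing key should simply fail the call:

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo lookup seen in the trace above;
# reconstructed from the traced commands, not the verbatim SPDK helper.
shopt -s extglob

get_meminfo() {
    local get=$1        # key to look up, e.g. AnonHugePages or HugePages_Surp
    local node=${2:-}   # optional NUMA node; empty in this trace, so /proc/meminfo is used
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node lookups read that node's own meminfo file when one is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Slurp the file and strip the "Node <N> " prefix that per-node files carry.
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk the "Key: value [kB]" lines until the requested key matches,
    # mirroring the @31/@32 read/continue loop in the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1   # assumption: a missing key fails the lookup
}

get_meminfo AnonHugePages   # prints 0 on the VM traced above

The same helper is reused for HugePages_Surp, HugePages_Rsvd and HugePages_Total below, which is why the identical local/mapfile/printf preamble repeats in the trace.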
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.286 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.287 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555336 kB' 'MemAvailable: 9493664 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 493860 kB' 'Inactive: 2769028 kB' 'Active(anon): 131636 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122556 kB' 'Mapped: 48676 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167884 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79856 kB' 'KernelStack: 6528 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:04:46.287 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- read/compare loop: each /proc/meminfo key from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with continue
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
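A quick cross-check on the snapshot printed above: the hugepage pool it reports is internally consistent, since HugePages_Total multiplied by Hugepagesize equals the Hugetlb figure.

# 1024 pages * 2048 kB/page = 2097152 kB (2 GiB), matching 'Hugetlb: 2097152 kB'.
echo "$((1024 * 2048)) kB"   # -> 2097152 kB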
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.288 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555336 kB' 'MemAvailable: 9493664 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 493016 kB' 'Inactive: 2769028 kB' 'Active(anon): 130792 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122232 kB' 'Mapped: 48552 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167884 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79856 kB' 'KernelStack: 6544 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:04:46.289 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- read/compare loop: each /proc/meminfo key from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with continue
00:04:46.290 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.290 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.290 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:46.290 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
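The hugepages.sh@102-@109 entries above print this run's bookkeeping and assert that the kernel still reports the requested page count before HugePages_Total is re-read. A hedged bash sketch of that sequence, using the values echoed in this run (the variable names follow the trace; the surrounding script structure is an assumption, not the verbatim SPDK test):

#!/usr/bin/env bash
# Sketch of the bookkeeping echoed at hugepages.sh@102-@105 and the
# consistency checks at @107/@109, with this run's values.
nr_hugepages=1024
resv=0   # HugePages_Rsvd looked up above
surp=0   # HugePages_Surp looked up above
anon=0   # AnonHugePages looked up above

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# @107: the requested 1024 pages must equal nr_hugepages plus any surplus or
# reserved pages; @109: with both at 0, the count must match exactly. A failed
# (( )) test returns non-zero, which would trip the suite's error handling.
(( 1024 == nr_hugepages + surp + resv ))
(( 1024 == nr_hugepages ))

In this run both checks hold trivially (surp and resv are 0), so the test proceeds to re-read HugePages_Total, which is the lookup that starts below.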
00:04:46.290 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.290 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555336 kB' 'MemAvailable: 9493664 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 493096 kB' 'Inactive: 2769028 kB' 'Active(anon): 130872 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122076 kB' 'Mapped: 48552 kB' 'Shmem: 10468 kB' 'KReclaimable: 88028 kB' 'Slab: 167880 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79852 kB' 'KernelStack: 6576 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
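The snapshot printed above is internally consistent for the hugepage pool this test expects: HugePages_Total and HugePages_Free are both 1024 with HugePages_Rsvd and HugePages_Surp at 0, and with Hugepagesize at 2048 kB the pool accounts for 1024 * 2048 kB = 2097152 kB, exactly the reported Hugetlb value (2 GiB allocated but idle). A quick way to reproduce that cross-check on any host (plain awk, not part of the test scripts):

    awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {sz=$2} END {print t * sz " kB"}' /proc/meminfo
    # -> 2097152 kB on this runner, matching the Hugetlb line in the snapshot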
00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.291 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6555336 kB' 'MemUsed: 5686632 kB' 'SwapCached: 0 kB' 'Active: 493084 kB' 'Inactive: 2769028 kB' 'Active(anon): 130860 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 3141720 kB' 'Mapped: 48552 kB' 'AnonPages: 122320 kB' 'Shmem: 10468 kB' 'KernelStack: 6576 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88028 kB' 'Slab: 167880 kB' 'SReclaimable: 88028 kB' 'SUnreclaim: 79852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.292 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.293 node0=1024 expecting 1024 00:04:46.293 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.294 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.294 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:46.294 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:46.294 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:46.294 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.294 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.865 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.865 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.865 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:46.865 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.865 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6557084 kB' 'MemAvailable: 9495408 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 489768 kB' 'Inactive: 2769028 kB' 'Active(anon): 127544 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118612 kB' 'Mapped: 47840 kB' 'Shmem: 10468 kB' 'KReclaimable: 88024 kB' 'Slab: 167848 kB' 'SReclaimable: 88024 kB' 'SUnreclaim: 79824 kB' 'KernelStack: 6484 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
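By this point the first verification pass has finished: get_meminfo HugePages_Total returned 1024, get_nodes found a single NUMA node (no_nodes=1), and the per-node HugePages_Surp lookup read /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, stripping the leading "Node 0 " prefix so the keys parse the same way ("node0=1024 expecting 1024" is the result). setup.sh was then re-run with NRHUGE=512 and, as the INFO line shows, left the existing 1024-page pool alone, so verify_nr_hugepages starts a second pass. A simplified sketch of the per-node variant of the lookup, line-at-a-time rather than the script's mapfile-over-an-array approach (the function name is illustrative):

    get_node_meminfo_sketch() {
        # $1 = meminfo key, $2 = optional NUMA node id
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}          # sysfs rows carry a "Node <N> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # get_node_meminfo_sketch HugePages_Surp 0   -> prints 0, as in the trace above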
00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
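This second pass opens with the transparent-hugepage gate at hugepages.sh@96: the string being tested, "always [madvise] never", is presumably the contents of /sys/kernel/mm/transparent_hugepage/enabled, and because it is not set to "[never]" the script goes on to read AnonHugePages (the scan surrounding this point), which a few entries further down comes back as 0 and is recorded as anon=0. A stand-alone sketch of that gate, under the assumption that the tested string comes from that sysfs knob (the helper name is illustrative, not part of the test scripts):

    thp_anon_kb_sketch() {
        local enabled
        enabled=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
        if [[ $enabled != *"[never]"* ]]; then
            # THP not globally disabled: report THP-backed anonymous memory, in kB
            awk '/^AnonHugePages:/ {print $2; exit}' /proc/meminfo
        else
            echo 0
        fi
    }
    # On this runner the knob would read "always [madvise] never", so the lookup runs and prints 0.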
00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.866 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6557084 kB' 'MemAvailable: 9495408 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 489164 kB' 'Inactive: 2769028 kB' 'Active(anon): 126940 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118044 kB' 'Mapped: 47812 kB' 'Shmem: 10468 kB' 'KReclaimable: 88024 kB' 'Slab: 167820 kB' 'SReclaimable: 88024 kB' 'SUnreclaim: 79796 kB' 'KernelStack: 6448 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
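For anyone following the xtrace above: the backslash-escaped key in each [[ ... ]] test (\A\n\o\n\H\u\g\e\P\a\g\e\s, \H\u\g\e\P\a\g\e\s\_\S\u\r\p, ...) is simply how bash's trace renders the literal pattern being matched, and the repeated IFS=': ' / read / continue entries are one pass over /proc/meminfo per requested key. A minimal sketch of that lookup, reconstructed from the traced commands in setup/common.sh (illustrative only, not the verbatim source; the helper name below is made up):

  # Sketch of the traced meminfo lookup. Assumes bash with extglob for the "Node N " prefix strip.
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-}            # key to look up, optional NUMA node id
      local mem_f=/proc/meminfo
      # Per-node lookups read the node's own meminfo when it exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local mem var val _
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix of per-node files
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip every key that is not the one requested
          echo "$val"                        # numeric value only; the trailing "kB" lands in _
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Surp   -> prints 0 on the box traced above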
00:04:46.867 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.868 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6557084 kB' 'MemAvailable: 9495408 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 489212 kB' 'Inactive: 2769028 kB' 'Active(anon): 126988 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118140 kB' 'Mapped: 47812 kB' 'Shmem: 10468 kB' 'KReclaimable: 88024 kB' 'Slab: 167812 kB' 'SReclaimable: 88024 kB' 'SUnreclaim: 79788 kB' 'KernelStack: 6464 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
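One sanity check falls straight out of the /proc/meminfo snapshot printed above: 1024 huge pages of 2048 kB each account exactly for the reported Hugetlb total. Plain shell arithmetic with the values copied from that snapshot:

  # Values taken from the meminfo snapshot above.
  hugepages_total=1024 hugepagesize_kb=2048
  echo $(( hugepages_total * hugepagesize_kb ))   # 2097152 kB (2 GiB), matching 'Hugetlb: 2097152 kB'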
00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.869 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.870 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.871 nr_hugepages=1024 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.871 resv_hugepages=0 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.871 surplus_hugepages=0 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.871 anon_hugepages=0 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6557084 kB' 'MemAvailable: 9495408 kB' 'Buffers: 2436 kB' 'Cached: 3139284 kB' 'SwapCached: 0 kB' 'Active: 489152 kB' 'Inactive: 2769028 kB' 'Active(anon): 126928 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118036 kB' 'Mapped: 47812 kB' 'Shmem: 10468 kB' 'KReclaimable: 88024 kB' 'Slab: 167808 kB' 'SReclaimable: 88024 kB' 'SUnreclaim: 79784 kB' 'KernelStack: 6448 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
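The bookkeeping hugepages.sh is doing around these lookups (the anon=0, surp=0, resv=0 assignments and the nr_hugepages / resv_hugepages / surplus_hugepages / anon_hugepages echoes a little further up) reduces to two arithmetic checks, both of which trace as true here: (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )). A hedged recreation with the echoed numbers, just to make the comparison explicit (anon, surp and resv are the names visible in the trace; expected_total is introduced here for readability):

  # Numbers echoed by the no_shrink_alloc test above.
  nr_hugepages=1024 surp=0 resv=0 anon=0
  expected_total=1024   # the literal 1024 substituted into the traced (( ... )) checks
  (( expected_total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
  (( anon == 0 )) && echo "no anonymous huge pages (THP) in use during the check"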
00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.871 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.872 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6556832 kB' 'MemUsed: 5685136 kB' 'SwapCached: 0 kB' 'Active: 489224 kB' 'Inactive: 2769028 kB' 'Active(anon): 127000 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 
kB' 'Inactive(file): 2769028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 3141720 kB' 'Mapped: 47812 kB' 'AnonPages: 118148 kB' 'Shmem: 10468 kB' 'KernelStack: 6464 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88024 kB' 'Slab: 167808 kB' 'SReclaimable: 88024 kB' 'SUnreclaim: 79784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 
06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.873 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.874 node0=1024 expecting 1024 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.874 00:04:46.874 real 0m1.148s 00:04:46.874 user 0m0.563s 00:04:46.874 sys 0m0.597s 00:04:46.874 ************************************ 00:04:46.874 END TEST no_shrink_alloc 00:04:46.874 ************************************ 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.874 06:50:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.132 06:50:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:47.132 06:50:54 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:47.132 06:50:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:47.132 06:50:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:47.132 
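The long trace that closes out no_shrink_alloc above is the common.sh get_meminfo helper scanning a meminfo file key by key until it reaches the requested field (HugePages_Total globally, then HugePages_Surp for node 0) and echoing its value. As a hedged illustration only, not the SPDK helper itself, a minimal standalone equivalent of that scan could look like this:

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node <n> " prefix strip below

# Hedged sketch of a get_meminfo-style lookup, reconstructed from the trace above.
# With no node argument it reads /proc/meminfo; with a node it reads that node's
# sysfs meminfo, strips the "Node <n> " prefix, and prints the requested value.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node lines carry a "Node <n> " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue       # skip every other key, exactly what the trace shows
        echo "$val"
        return 0
    done
    return 1
}

# get_meminfo_sketch HugePages_Total      -> prints 1024 on the VM traced above
# get_meminfo_sketch HugePages_Surp 0     -> prints 0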
06:50:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.132 06:50:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:47.132 06:50:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.132 06:50:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:47.132 06:50:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:47.132 06:50:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:47.132 00:04:47.132 real 0m4.715s 00:04:47.132 user 0m2.276s 00:04:47.132 sys 0m2.488s 00:04:47.132 ************************************ 00:04:47.132 END TEST hugepages 00:04:47.132 ************************************ 00:04:47.132 06:50:54 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.132 06:50:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.132 06:50:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:47.132 06:50:54 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:47.132 06:50:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.132 06:50:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.132 06:50:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.132 ************************************ 00:04:47.132 START TEST driver 00:04:47.132 ************************************ 00:04:47.132 06:50:55 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:47.132 * Looking for test storage... 00:04:47.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:47.132 06:50:55 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:47.132 06:50:55 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.132 06:50:55 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.698 06:50:55 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:47.698 06:50:55 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.698 06:50:55 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.698 06:50:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:47.698 ************************************ 00:04:47.698 START TEST guess_driver 00:04:47.698 ************************************ 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
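Between the hugepages summary and the driver suite, the log shows the clear_hp teardown walking every per-node hugepages directory, echoing 0, and exporting CLEAR_HUGE=yes before setup.sh reset runs. A hedged sketch of that cleanup step (not the SPDK clear_hp function), assuming the traced "echo 0" targets each size's nr_hugepages knob:

# Hedged sketch: release any reserved hugepages so later suites start clean; needs root.
clear_hugepages_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -d $hp ]] || continue
            echo 0 > "$hp/nr_hugepages"   # assumption: the logged "echo 0" writes here
        done
    done
    export CLEAR_HUGE=yes                 # mirrors the export in the log, for later setup.sh runs
}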
00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:47.698 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:47.698 Looking for driver=uio_pci_generic 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.698 06:50:55 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.264 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:48.264 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:48.264 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.522 06:50:56 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.088 00:04:49.088 real 0m1.374s 00:04:49.088 user 0m0.531s 00:04:49.088 sys 0m0.854s 00:04:49.088 06:50:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
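The guess_driver run above settles on uio_pci_generic: the vfio branch returns 1 because zero IOMMU groups are found and unsafe no-IOMMU mode is not enabled, and modprobe --show-depends then confirms the uio modules are available. A simplified, hedged sketch of that decision (not the SPDK driver.sh itself):

# Hedged sketch of the driver-selection logic traced above.
pick_driver_sketch() {
    shopt -s nullglob                      # an empty glob should count as zero IOMMU groups
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic               # the branch this VM ends up in
    else
        echo 'No valid driver found'
    fi
}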
xtrace_disable 00:04:49.088 06:50:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:49.088 ************************************ 00:04:49.088 END TEST guess_driver 00:04:49.088 ************************************ 00:04:49.088 06:50:57 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:49.088 ************************************ 00:04:49.088 END TEST driver 00:04:49.088 ************************************ 00:04:49.088 00:04:49.088 real 0m2.081s 00:04:49.088 user 0m0.794s 00:04:49.088 sys 0m1.350s 00:04:49.088 06:50:57 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.088 06:50:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:49.088 06:50:57 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:49.088 06:50:57 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:49.088 06:50:57 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.088 06:50:57 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.088 06:50:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.088 ************************************ 00:04:49.088 START TEST devices 00:04:49.088 ************************************ 00:04:49.088 06:50:57 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:49.346 * Looking for test storage... 00:04:49.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:49.346 06:50:57 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:49.346 06:50:57 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:49.346 06:50:57 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.346 06:50:57 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
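The devices suite starts by filtering out zoned namespaces: each /sys/block/nvme* entry is checked via its queue/zoned attribute, and in this run all four namespaces report "none", so nothing is excluded. A hedged, minimal version of that filter (not the autotest_common.sh get_zoned_devs helper):

# Hedged sketch: print any NVMe namespace whose queue reports a zoned model other than "none".
get_zoned_devs_sketch() {
    local nvme model
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        model=$(< "$nvme/queue/zoned")
        [[ $model != none ]] && echo "${nvme##*/}"   # zoned namespaces are skipped by the tests
    done
}
# In the run above this prints nothing: nvme0n1, nvme0n2, nvme0n3 and nvme1n1 all report "none".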
00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:49.914 06:50:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:49.914 06:50:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:49.914 06:50:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:49.914 06:50:57 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:50.173 No valid GPT data, bailing 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:50.173 06:50:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:50.173 06:50:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:50.173 06:50:58 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:50.173 
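Each candidate namespace is then screened: spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing", empty PTTYPE, return 1 from block_in_use), and its capacity (4294967296 bytes for the nvme0 namespaces here) must be at least min_disk_size=3221225472. A hedged sketch of an equivalent check using only blkid and sysfs; the real script additionally consults the repo's spdk-gpt.py:

# Hedged sketch: a namespace qualifies if it has no partition table and is at least 3 GiB.
usable_test_disk_sketch() {
    local block=$1
    local min_disk_size=$((3 * 1024 * 1024 * 1024))          # 3221225472, as in the log
    local pt size_bytes
    pt=$(blkid -s PTTYPE -o value "/dev/$block")              # empty when no GPT/MBR signature exists
    [[ -z $pt ]] || return 1                                  # an existing partition table disqualifies it
    size_bytes=$(( $(< "/sys/block/$block/size") * 512 ))     # the sysfs size file counts 512-byte sectors
    (( size_bytes >= min_disk_size ))
}
# usable_test_disk_sketch nvme0n1 && echo "nvme0n1 is a candidate"   # all four namespaces pass above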
06:50:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:50.173 No valid GPT data, bailing 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:50.173 06:50:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:50.173 06:50:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:50.173 06:50:58 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:50.173 No valid GPT data, bailing 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:50.173 06:50:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:50.173 06:50:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:50.173 06:50:58 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:50.173 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:50.173 06:50:58 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:50.173 No valid GPT data, bailing 00:04:50.173 06:50:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:50.432 06:50:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:50.432 06:50:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:50.432 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:50.432 06:50:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:50.432 06:50:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:50.432 06:50:58 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:50.432 06:50:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:50.432 06:50:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:50.432 06:50:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:50.432 06:50:58 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:50.432 06:50:58 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:50.432 06:50:58 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:50.432 06:50:58 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.432 06:50:58 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.432 06:50:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:50.432 ************************************ 00:04:50.432 START TEST nvme_mount 00:04:50.432 ************************************ 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.432 06:50:58 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:51.368 Creating new GPT entries in memory. 00:04:51.368 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:51.368 other utilities. 00:04:51.368 06:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:51.368 06:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.368 06:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:51.368 06:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:51.368 06:50:59 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:52.373 Creating new GPT entries in memory. 00:04:52.373 The operation has completed successfully. 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71290 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.373 06:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:52.631 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:52.631 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:52.631 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.631 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.631 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:52.631 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.890 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.890 06:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.148 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.148 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.148 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.148 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:53.148 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.149 06:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.149 06:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.407 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.407 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:53.407 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.407 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.407 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.407 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.666 06:51:01 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.666 06:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.924 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.924 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:53.924 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.924 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.924 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.924 06:51:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:54.183 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.183 00:04:54.183 real 0m3.972s 00:04:54.183 user 0m0.700s 00:04:54.183 sys 0m1.018s 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.183 06:51:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:54.183 ************************************ 00:04:54.183 END TEST nvme_mount 00:04:54.183 ************************************ 00:04:54.441 06:51:02 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:54.441 06:51:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:54.441 06:51:02 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.441 06:51:02 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.441 06:51:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:54.441 ************************************ 00:04:54.441 START TEST dm_mount 00:04:54.441 ************************************ 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
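The dm_mount test starting here partitions the same disk the nvme_mount test just used, only with part_no=2. Below is a rough standalone sketch of that partition_drive flow (zap the GPT, then carve equal partitions with sgdisk under flock), using the device name and sector arithmetic visible in this trace; udevadm settle stands in for the repo's sync_dev_uevents.sh helper and is an assumption rather than what the script actually calls.

# Illustrative sketch only -- mirrors the setup/common.sh trace around this point.
disk=/dev/nvme0n1
part_no=2
size=$(( 1073741824 / 4096 ))   # bytes -> sgdisk sectors, as in the trace
sgdisk "$disk" --zap-all        # destroy any existing GPT/MBR structures
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
done
udevadm settle                  # assumption: stand-in for sync_dev_uevents.sh
lsblk "$disk"                   # expect nvme0n1p1 and nvme0n1p2, as logged below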
00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:54.441 06:51:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:55.374 Creating new GPT entries in memory. 00:04:55.374 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:55.374 other utilities. 00:04:55.374 06:51:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:55.374 06:51:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.374 06:51:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:55.374 06:51:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:55.374 06:51:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:56.309 Creating new GPT entries in memory. 00:04:56.309 The operation has completed successfully. 00:04:56.309 06:51:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:56.309 06:51:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.309 06:51:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.309 06:51:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.309 06:51:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:57.685 The operation has completed successfully. 
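With nvme0n1p1 and nvme0n1p2 in place, the test goes on below to stitch them into a device-mapper device named nvme_dm_test, format it, and mount it. A minimal sketch of that assembly follows; note that the trace only records the 'dmsetup create nvme_dm_test' call, so the linear table (and the use of blockdev to size the segments) is an assumption for illustration, not the table the test actually feeds in.

# Illustrative sketch: join the two partitions into one linear dm device.
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")    # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
mkdir -p "$mnt"
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mount /dev/mapper/nvme_dm_test "$mnt"
# Tear-down mirrors cleanup_dm later in the log:
#   umount "$mnt"; dmsetup remove --force nvme_dm_test; wipefs --all "$p1" "$p2"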
00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 71719 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:57.685 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.943 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:57.943 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.943 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:57.943 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.944 06:51:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.202 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.202 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:58.202 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:58.202 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.202 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.202 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:58.461 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:58.461 00:04:58.461 real 0m4.198s 00:04:58.461 user 0m0.440s 00:04:58.461 sys 0m0.711s 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.461 06:51:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:58.461 ************************************ 00:04:58.461 END TEST dm_mount 00:04:58.461 ************************************ 00:04:58.461 06:51:06 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:58.461 06:51:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:58.461 06:51:06 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:58.461 06:51:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.461 06:51:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.461 06:51:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:58.461 06:51:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.461 06:51:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.029 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:59.029 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:59.029 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.029 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.029 06:51:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:59.029 06:51:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.029 06:51:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:59.029 06:51:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.029 06:51:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:59.029 06:51:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.029 06:51:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:59.029 00:04:59.029 real 0m9.684s 00:04:59.029 user 0m1.827s 00:04:59.029 sys 0m2.270s 00:04:59.029 06:51:06 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.029 06:51:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:59.029 ************************************ 00:04:59.029 END TEST devices 00:04:59.029 ************************************ 00:04:59.029 06:51:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:59.029 00:04:59.029 real 0m21.555s 00:04:59.029 user 0m7.128s 00:04:59.029 sys 0m8.854s 00:04:59.029 06:51:06 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.029 06:51:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.029 ************************************ 00:04:59.029 END TEST setup.sh 00:04:59.029 ************************************ 00:04:59.029 06:51:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.029 06:51:06 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:59.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.596 Hugepages 00:04:59.596 node hugesize free / total 00:04:59.596 node0 1048576kB 0 / 0 00:04:59.596 node0 2048kB 2048 / 2048 00:04:59.596 00:04:59.596 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:59.596 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:59.855 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:59.855 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:59.855 06:51:07 -- spdk/autotest.sh@130 -- # uname -s 00:04:59.855 06:51:07 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:59.855 06:51:07 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:59.855 06:51:07 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.422 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.681 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.681 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.681 06:51:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:01.614 06:51:09 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:01.614 06:51:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:01.614 06:51:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.614 06:51:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:01.614 06:51:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:01.614 06:51:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:01.614 06:51:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.614 06:51:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:01.614 06:51:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:01.614 06:51:09 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:01.614 06:51:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:01.614 06:51:09 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.181 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.181 Waiting for block devices as requested 00:05:02.181 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:02.181 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:02.181 06:51:10 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:02.181 06:51:10 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:02.181 06:51:10 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:02.181 06:51:10 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:02.181 06:51:10 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:02.181 06:51:10 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:02.181 06:51:10 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:02.181 06:51:10 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:02.181 06:51:10 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:02.181 06:51:10 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:02.181 06:51:10 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:02.181 06:51:10 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:02.181 06:51:10 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:02.181 06:51:10 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:02.181 06:51:10 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:02.181 06:51:10 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:02.181 06:51:10 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:02.181 06:51:10 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:02.181 06:51:10 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:02.181 06:51:10 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:02.181 06:51:10 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:02.181 06:51:10 -- common/autotest_common.sh@1557 -- # continue 00:05:02.181 
06:51:10 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:02.181 06:51:10 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:02.181 06:51:10 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:02.181 06:51:10 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:02.181 06:51:10 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:02.181 06:51:10 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:02.181 06:51:10 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:02.441 06:51:10 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:02.441 06:51:10 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:02.441 06:51:10 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:02.441 06:51:10 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:02.441 06:51:10 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:02.441 06:51:10 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:02.441 06:51:10 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:02.441 06:51:10 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:02.441 06:51:10 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:02.441 06:51:10 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:02.441 06:51:10 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:02.441 06:51:10 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:02.441 06:51:10 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:02.441 06:51:10 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:02.441 06:51:10 -- common/autotest_common.sh@1557 -- # continue 00:05:02.441 06:51:10 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:02.441 06:51:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.441 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:05:02.441 06:51:10 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:02.441 06:51:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.441 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:05:02.441 06:51:10 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.007 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.265 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.265 06:51:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:03.265 06:51:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.265 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.265 06:51:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:03.265 06:51:11 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:03.265 06:51:11 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:03.265 06:51:11 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:03.265 06:51:11 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:03.265 06:51:11 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:03.265 06:51:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:03.265 06:51:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:03.265 06:51:11 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.265 06:51:11 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.265 06:51:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:03.265 06:51:11 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:03.265 06:51:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:03.265 06:51:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:03.265 06:51:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:03.265 06:51:11 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:03.265 06:51:11 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.265 06:51:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:03.265 06:51:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:03.265 06:51:11 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:03.265 06:51:11 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.265 06:51:11 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:03.265 06:51:11 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:03.265 06:51:11 -- common/autotest_common.sh@1593 -- # return 0 00:05:03.265 06:51:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:03.265 06:51:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:03.265 06:51:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:03.265 06:51:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:03.265 06:51:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:03.265 06:51:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.265 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.265 06:51:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:03.265 06:51:11 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:03.265 06:51:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.265 06:51:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.265 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.265 ************************************ 00:05:03.265 START TEST env 00:05:03.265 ************************************ 00:05:03.265 06:51:11 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:03.265 * Looking for test storage... 
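The opal_revert_cleanup step traced above first discovers the NVMe controllers SPDK knows about and then checks each one's PCI device ID against 0x0a54 before attempting any OPAL revert. A condensed sketch of that discovery and filter, using the same gen_nvme.sh/jq pipeline and sysfs paths that appear in the trace (the surrounding autotest plumbing is omitted):

# Sketch of the BDF discovery and device-ID filter seen in the trace above.
rootdir=/home/vagrant/spdk_repo/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
opal_bdfs=()
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")   # device ID the cleanup path looks for
done
echo "OPAL-capable controllers: ${#opal_bdfs[@]}"    # 0 in this run; the QEMU devices report 0x0010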
00:05:03.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:03.523 06:51:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:03.523 06:51:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.523 06:51:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.523 06:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.523 ************************************ 00:05:03.523 START TEST env_memory 00:05:03.523 ************************************ 00:05:03.523 06:51:11 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:03.523 00:05:03.523 00:05:03.523 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.523 http://cunit.sourceforge.net/ 00:05:03.523 00:05:03.523 00:05:03.523 Suite: memory 00:05:03.523 Test: alloc and free memory map ...[2024-07-13 06:51:11.395746] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:03.523 passed 00:05:03.523 Test: mem map translation ...[2024-07-13 06:51:11.420188] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:03.523 [2024-07-13 06:51:11.420235] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:03.523 [2024-07-13 06:51:11.420281] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:03.523 [2024-07-13 06:51:11.420295] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:03.523 passed 00:05:03.523 Test: mem map registration ...[2024-07-13 06:51:11.473182] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:03.523 [2024-07-13 06:51:11.473231] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:03.523 passed 00:05:03.523 Test: mem map adjacent registrations ...passed 00:05:03.523 00:05:03.523 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.523 suites 1 1 n/a 0 0 00:05:03.523 tests 4 4 4 0 0 00:05:03.523 asserts 152 152 152 0 n/a 00:05:03.523 00:05:03.523 Elapsed time = 0.172 seconds 00:05:03.523 00:05:03.523 real 0m0.189s 00:05:03.523 user 0m0.170s 00:05:03.523 sys 0m0.015s 00:05:03.523 06:51:11 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.523 06:51:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:03.523 ************************************ 00:05:03.523 END TEST env_memory 00:05:03.523 ************************************ 00:05:03.523 06:51:11 env -- common/autotest_common.sh@1142 -- # return 0 00:05:03.523 06:51:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.523 06:51:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.523 06:51:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.523 06:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.523 ************************************ 00:05:03.523 START TEST env_vtophys 
00:05:03.523 ************************************ 00:05:03.523 06:51:11 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.781 EAL: lib.eal log level changed from notice to debug 00:05:03.781 EAL: Detected lcore 0 as core 0 on socket 0 00:05:03.781 EAL: Detected lcore 1 as core 0 on socket 0 00:05:03.781 EAL: Detected lcore 2 as core 0 on socket 0 00:05:03.781 EAL: Detected lcore 3 as core 0 on socket 0 00:05:03.781 EAL: Detected lcore 4 as core 0 on socket 0 00:05:03.781 EAL: Detected lcore 5 as core 0 on socket 0 00:05:03.781 EAL: Detected lcore 6 as core 0 on socket 0 00:05:03.781 EAL: Detected lcore 7 as core 0 on socket 0 00:05:03.781 EAL: Detected lcore 8 as core 0 on socket 0 00:05:03.781 EAL: Detected lcore 9 as core 0 on socket 0 00:05:03.781 EAL: Maximum logical cores by configuration: 128 00:05:03.781 EAL: Detected CPU lcores: 10 00:05:03.781 EAL: Detected NUMA nodes: 1 00:05:03.781 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:03.781 EAL: Detected shared linkage of DPDK 00:05:03.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:03.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:03.781 EAL: Registered [vdev] bus. 00:05:03.781 EAL: bus.vdev log level changed from disabled to notice 00:05:03.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:03.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:03.781 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:03.781 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:03.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:03.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:03.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:03.781 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:03.781 EAL: No shared files mode enabled, IPC will be disabled 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Selected IOVA mode 'PA' 00:05:03.781 EAL: Probing VFIO support... 00:05:03.781 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:03.781 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:03.781 EAL: Ask a virtual area of 0x2e000 bytes 00:05:03.781 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:03.781 EAL: Setting up physically contiguous memory... 
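The EAL messages above help explain why this run ends up in PA IOVA mode: neither vfio nor vfio_pci is loaded in the VM, so VFIO support is skipped and the uio_pci_generic-bound devices are used instead. A quick, illustrative pre-flight check for that situation (standard sysfs/procfs paths, not part of the test suite):

# Check whether VFIO could have been used, and what backs the hugepage memory.
if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
    echo "vfio is loaded"
else
    echo "vfio not loaded -- EAL will skip VFIO support, as in this log"
fi
ls /sys/kernel/iommu_groups/ 2>/dev/null | wc -l     # 0 when no IOMMU is active
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo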
00:05:03.781 EAL: Setting maximum number of open files to 524288 00:05:03.781 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:03.781 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:03.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.781 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:03.781 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.781 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:03.781 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:03.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.781 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:03.781 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.781 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:03.781 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:03.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.781 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:03.781 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.781 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:03.781 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:03.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.781 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:03.781 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.781 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:03.781 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:03.781 EAL: Hugepages will be freed exactly as allocated. 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: TSC frequency is ~2200000 KHz 00:05:03.781 EAL: Main lcore 0 is ready (tid=7f8d49feca00;cpuset=[0]) 00:05:03.781 EAL: Trying to obtain current memory policy. 00:05:03.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.781 EAL: Restoring previous memory policy: 0 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was expanded by 2MB 00:05:03.781 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:03.781 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.781 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:03.781 00:05:03.781 00:05:03.781 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.781 http://cunit.sourceforge.net/ 00:05:03.781 00:05:03.781 00:05:03.781 Suite: components_suite 00:05:03.781 Test: vtophys_malloc_test ...passed 00:05:03.781 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
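Before the malloc test output continues below, note that the virtual-area reservations just logged follow directly from the numbers EAL prints: each memseg list holds n_segs:8192 segments of 2 MiB, so 16 GiB (0x400000000) of address space is reserved per list, four lists for socket 0, even though only 2048 hugepages of 2 MiB actually exist on this VM. Two quick commands to confirm the arithmetic and the backing pool (illustrative; the hugepage counts are the ones setup.sh status reported earlier in the log):

printf '0x%x\n' $(( 8192 * 2097152 ))                    # -> 0x400000000, matching the reservations above
grep -E 'HugePages_Total|HugePages_Free' /proc/meminfo   # 2048 x 2048 kB pages on this host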
00:05:03.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.781 EAL: Restoring previous memory policy: 4 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was expanded by 4MB 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was shrunk by 4MB 00:05:03.781 EAL: Trying to obtain current memory policy. 00:05:03.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.781 EAL: Restoring previous memory policy: 4 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was expanded by 6MB 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was shrunk by 6MB 00:05:03.781 EAL: Trying to obtain current memory policy. 00:05:03.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.781 EAL: Restoring previous memory policy: 4 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was expanded by 10MB 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was shrunk by 10MB 00:05:03.781 EAL: Trying to obtain current memory policy. 00:05:03.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.781 EAL: Restoring previous memory policy: 4 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was expanded by 18MB 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was shrunk by 18MB 00:05:03.781 EAL: Trying to obtain current memory policy. 00:05:03.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.781 EAL: Restoring previous memory policy: 4 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was expanded by 34MB 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was shrunk by 34MB 00:05:03.781 EAL: Trying to obtain current memory policy. 
00:05:03.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.781 EAL: Restoring previous memory policy: 4 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was expanded by 66MB 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was shrunk by 66MB 00:05:03.781 EAL: Trying to obtain current memory policy. 00:05:03.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.781 EAL: Restoring previous memory policy: 4 00:05:03.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.781 EAL: request: mp_malloc_sync 00:05:03.781 EAL: No shared files mode enabled, IPC is disabled 00:05:03.781 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.039 EAL: request: mp_malloc_sync 00:05:04.039 EAL: No shared files mode enabled, IPC is disabled 00:05:04.039 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.039 EAL: Trying to obtain current memory policy. 00:05:04.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.039 EAL: Restoring previous memory policy: 4 00:05:04.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.039 EAL: request: mp_malloc_sync 00:05:04.039 EAL: No shared files mode enabled, IPC is disabled 00:05:04.039 EAL: Heap on socket 0 was expanded by 258MB 00:05:04.039 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.039 EAL: request: mp_malloc_sync 00:05:04.039 EAL: No shared files mode enabled, IPC is disabled 00:05:04.039 EAL: Heap on socket 0 was shrunk by 258MB 00:05:04.039 EAL: Trying to obtain current memory policy. 00:05:04.039 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.297 EAL: Restoring previous memory policy: 4 00:05:04.297 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.297 EAL: request: mp_malloc_sync 00:05:04.297 EAL: No shared files mode enabled, IPC is disabled 00:05:04.297 EAL: Heap on socket 0 was expanded by 514MB 00:05:04.297 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.555 EAL: request: mp_malloc_sync 00:05:04.555 EAL: No shared files mode enabled, IPC is disabled 00:05:04.555 EAL: Heap on socket 0 was shrunk by 514MB 00:05:04.555 EAL: Trying to obtain current memory policy. 
00:05:04.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.813 EAL: Restoring previous memory policy: 4 00:05:04.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.813 EAL: request: mp_malloc_sync 00:05:04.813 EAL: No shared files mode enabled, IPC is disabled 00:05:04.813 EAL: Heap on socket 0 was expanded by 1026MB 00:05:04.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.078 EAL: request: mp_malloc_sync 00:05:05.078 EAL: No shared files mode enabled, IPC is disabled 00:05:05.078 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:05.078 passed 00:05:05.078 00:05:05.078 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.078 suites 1 1 n/a 0 0 00:05:05.078 tests 2 2 2 0 0 00:05:05.078 asserts 5421 5421 5421 0 n/a 00:05:05.078 00:05:05.078 Elapsed time = 1.286 seconds 00:05:05.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.078 EAL: request: mp_malloc_sync 00:05:05.078 EAL: No shared files mode enabled, IPC is disabled 00:05:05.078 EAL: Heap on socket 0 was shrunk by 2MB 00:05:05.078 EAL: No shared files mode enabled, IPC is disabled 00:05:05.078 EAL: No shared files mode enabled, IPC is disabled 00:05:05.078 EAL: No shared files mode enabled, IPC is disabled 00:05:05.078 00:05:05.078 real 0m1.495s 00:05:05.078 user 0m0.804s 00:05:05.078 sys 0m0.545s 00:05:05.078 06:51:13 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.078 06:51:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:05.078 ************************************ 00:05:05.078 END TEST env_vtophys 00:05:05.078 ************************************ 00:05:05.078 06:51:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:05.078 06:51:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:05.078 06:51:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.078 06:51:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.078 06:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.078 ************************************ 00:05:05.078 START TEST env_pci 00:05:05.078 ************************************ 00:05:05.078 06:51:13 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:05.384 00:05:05.384 00:05:05.384 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.384 http://cunit.sourceforge.net/ 00:05:05.384 00:05:05.384 00:05:05.384 Suite: pci 00:05:05.384 Test: pci_hook ...[2024-07-13 06:51:13.156953] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72905 has claimed it 00:05:05.384 passed 00:05:05.384 00:05:05.384 EAL: Cannot find device (10000:00:01.0) 00:05:05.384 EAL: Failed to attach device on primary process 00:05:05.384 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.384 suites 1 1 n/a 0 0 00:05:05.384 tests 1 1 1 0 0 00:05:05.384 asserts 25 25 25 0 n/a 00:05:05.384 00:05:05.384 Elapsed time = 0.002 seconds 00:05:05.384 00:05:05.384 real 0m0.020s 00:05:05.384 user 0m0.010s 00:05:05.384 sys 0m0.010s 00:05:05.384 06:51:13 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.384 06:51:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:05.384 ************************************ 00:05:05.384 END TEST env_pci 00:05:05.384 ************************************ 00:05:05.384 06:51:13 env -- common/autotest_common.sh@1142 -- # 
return 0 00:05:05.384 06:51:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:05.384 06:51:13 env -- env/env.sh@15 -- # uname 00:05:05.384 06:51:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:05.384 06:51:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:05.384 06:51:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.384 06:51:13 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:05.384 06:51:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.384 06:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.384 ************************************ 00:05:05.384 START TEST env_dpdk_post_init 00:05:05.384 ************************************ 00:05:05.384 06:51:13 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.384 EAL: Detected CPU lcores: 10 00:05:05.384 EAL: Detected NUMA nodes: 1 00:05:05.384 EAL: Detected shared linkage of DPDK 00:05:05.384 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.384 EAL: Selected IOVA mode 'PA' 00:05:05.384 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:05.384 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:05.384 Starting DPDK initialization... 00:05:05.384 Starting SPDK post initialization... 00:05:05.384 SPDK NVMe probe 00:05:05.384 Attaching to 0000:00:10.0 00:05:05.384 Attaching to 0000:00:11.0 00:05:05.384 Attached to 0000:00:10.0 00:05:05.384 Attached to 0000:00:11.0 00:05:05.384 Cleaning up... 
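The "SPDK NVMe probe / Attaching to 0000:00:10.0 / Attached to 0000:00:10.0" sequence above is the usual SPDK controller-enumeration flow: a probe callback decides whether to attach to each discovered controller, and an attach callback runs once a controller is ready. The sketch below assumes the spdk_nvme_probe()/spdk_nvme_detach() API from spdk/nvme.h is used directly; the callback names and the detach-at-exit handling are illustrative and not what env_dpdk_post_init itself does.

```c
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

#define MAX_CTRLRS 8

static struct spdk_nvme_ctrlr *g_ctrlrs[MAX_CTRLRS];
static int g_num_ctrlrs;

/* Decide whether to attach to a controller found during enumeration. */
static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;
}

/* Runs for each controller that was successfully attached. */
static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	if (g_num_ctrlrs < MAX_CTRLRS) {
		g_ctrlrs[g_num_ctrlrs++] = ctrlr;
	}
}

int
main(void)
{
	struct spdk_env_opts opts;
	int i;

	spdk_env_opts_init(&opts);
	opts.name = "nvme_probe_example";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* A NULL transport ID means "enumerate all local PCIe NVMe devices". */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}

	/* Detach after enumeration has finished. */
	for (i = 0; i < g_num_ctrlrs; i++) {
		spdk_nvme_detach(g_ctrlrs[i]);
	}
	return 0;
}
```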
00:05:05.384 00:05:05.384 real 0m0.174s 00:05:05.384 user 0m0.044s 00:05:05.384 sys 0m0.031s 00:05:05.384 06:51:13 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.384 06:51:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.384 ************************************ 00:05:05.384 END TEST env_dpdk_post_init 00:05:05.384 ************************************ 00:05:05.384 06:51:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:05.384 06:51:13 env -- env/env.sh@26 -- # uname 00:05:05.384 06:51:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:05.384 06:51:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.384 06:51:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.384 06:51:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.384 06:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.656 ************************************ 00:05:05.656 START TEST env_mem_callbacks 00:05:05.656 ************************************ 00:05:05.656 06:51:13 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.656 EAL: Detected CPU lcores: 10 00:05:05.656 EAL: Detected NUMA nodes: 1 00:05:05.656 EAL: Detected shared linkage of DPDK 00:05:05.656 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.656 EAL: Selected IOVA mode 'PA' 00:05:05.656 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.656 00:05:05.656 00:05:05.656 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.656 http://cunit.sourceforge.net/ 00:05:05.656 00:05:05.656 00:05:05.656 Suite: memory 00:05:05.656 Test: test ... 
00:05:05.656 register 0x200000200000 2097152 00:05:05.656 malloc 3145728 00:05:05.656 register 0x200000400000 4194304 00:05:05.656 buf 0x200000500000 len 3145728 PASSED 00:05:05.656 malloc 64 00:05:05.656 buf 0x2000004fff40 len 64 PASSED 00:05:05.656 malloc 4194304 00:05:05.656 register 0x200000800000 6291456 00:05:05.656 buf 0x200000a00000 len 4194304 PASSED 00:05:05.656 free 0x200000500000 3145728 00:05:05.656 free 0x2000004fff40 64 00:05:05.656 unregister 0x200000400000 4194304 PASSED 00:05:05.656 free 0x200000a00000 4194304 00:05:05.656 unregister 0x200000800000 6291456 PASSED 00:05:05.656 malloc 8388608 00:05:05.656 register 0x200000400000 10485760 00:05:05.656 buf 0x200000600000 len 8388608 PASSED 00:05:05.656 free 0x200000600000 8388608 00:05:05.656 unregister 0x200000400000 10485760 PASSED 00:05:05.656 passed 00:05:05.656 00:05:05.656 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.656 suites 1 1 n/a 0 0 00:05:05.656 tests 1 1 1 0 0 00:05:05.656 asserts 15 15 15 0 n/a 00:05:05.656 00:05:05.656 Elapsed time = 0.009 seconds 00:05:05.656 00:05:05.656 real 0m0.149s 00:05:05.656 user 0m0.018s 00:05:05.656 sys 0m0.030s 00:05:05.656 06:51:13 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.656 06:51:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:05.656 ************************************ 00:05:05.656 END TEST env_mem_callbacks 00:05:05.656 ************************************ 00:05:05.656 06:51:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:05.656 ************************************ 00:05:05.656 END TEST env 00:05:05.656 ************************************ 00:05:05.656 00:05:05.656 real 0m2.373s 00:05:05.656 user 0m1.165s 00:05:05.656 sys 0m0.842s 00:05:05.656 06:51:13 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.656 06:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.656 06:51:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.656 06:51:13 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:05.656 06:51:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.656 06:51:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.656 06:51:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.656 ************************************ 00:05:05.656 START TEST rpc 00:05:05.656 ************************************ 00:05:05.656 06:51:13 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:05.915 * Looking for test storage... 00:05:05.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:05.915 06:51:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73020 00:05:05.915 06:51:13 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:05.915 06:51:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.915 06:51:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73020 00:05:05.915 06:51:13 rpc -- common/autotest_common.sh@829 -- # '[' -z 73020 ']' 00:05:05.915 06:51:13 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.915 06:51:13 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.915 06:51:13 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
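The "register <addr> <len>" / "unregister <addr> <len>" trace in the memory test above is the kind of output a memory-map notification hook produces: SPDK replays existing registrations when a map is created and then notifies it whenever hugepage regions are added or removed. The sketch below assumes the spdk_mem_map_alloc()/notify_cb interface from spdk/env.h; it is not the actual mem_callbacks test source, and the allocation sizes are arbitrary.

```c
#include <stdio.h>
#include "spdk/env.h"

/* Notification hook: called once per region already in the memory map when
 * the map is created, and again whenever hugepage memory is later added
 * (REGISTER) or removed (UNREGISTER) -- the source of "register" and
 * "unregister" trace lines like the ones in the test output above. */
static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	printf("%s %p %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;
}

static const struct spdk_mem_map_ops g_ops = {
	.notify_cb = notify_cb,
};

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_mem_map *map;
	void *buf;

	spdk_env_opts_init(&opts);
	opts.name = "mem_callbacks_example";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	map = spdk_mem_map_alloc(0, &g_ops, NULL);
	if (map == NULL) {
		return 1;
	}

	/* Pinned allocations made after this point may grow the DPDK heap and
	 * therefore trigger further REGISTER/UNREGISTER notifications. */
	buf = spdk_dma_malloc(4 * 1024 * 1024, 0x200000, NULL);
	spdk_dma_free(buf);

	spdk_mem_map_free(&map);
	return 0;
}
```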
00:05:05.915 06:51:13 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.915 06:51:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.915 [2024-07-13 06:51:13.847260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:05.915 [2024-07-13 06:51:13.847389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73020 ] 00:05:05.915 [2024-07-13 06:51:13.987606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.173 [2024-07-13 06:51:14.090641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:06.173 [2024-07-13 06:51:14.090697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73020' to capture a snapshot of events at runtime. 00:05:06.173 [2024-07-13 06:51:14.090709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:06.173 [2024-07-13 06:51:14.090717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:06.173 [2024-07-13 06:51:14.090725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73020 for offline analysis/debug. 00:05:06.173 [2024-07-13 06:51:14.090757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.108 06:51:14 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.108 06:51:14 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:07.108 06:51:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.108 06:51:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.108 06:51:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:07.108 06:51:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:07.108 06:51:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.108 06:51:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.108 06:51:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.108 ************************************ 00:05:07.108 START TEST rpc_integrity 00:05:07.108 ************************************ 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.108 06:51:14 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:07.108 { 00:05:07.108 "aliases": [ 00:05:07.108 "70a0aecf-5b51-41af-99b4-0a26a4d1bbfb" 00:05:07.108 ], 00:05:07.108 "assigned_rate_limits": { 00:05:07.108 "r_mbytes_per_sec": 0, 00:05:07.108 "rw_ios_per_sec": 0, 00:05:07.108 "rw_mbytes_per_sec": 0, 00:05:07.108 "w_mbytes_per_sec": 0 00:05:07.108 }, 00:05:07.108 "block_size": 512, 00:05:07.108 "claimed": false, 00:05:07.108 "driver_specific": {}, 00:05:07.108 "memory_domains": [ 00:05:07.108 { 00:05:07.108 "dma_device_id": "system", 00:05:07.108 "dma_device_type": 1 00:05:07.108 }, 00:05:07.108 { 00:05:07.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.108 "dma_device_type": 2 00:05:07.108 } 00:05:07.108 ], 00:05:07.108 "name": "Malloc0", 00:05:07.108 "num_blocks": 16384, 00:05:07.108 "product_name": "Malloc disk", 00:05:07.108 "supported_io_types": { 00:05:07.108 "abort": true, 00:05:07.108 "compare": false, 00:05:07.108 "compare_and_write": false, 00:05:07.108 "copy": true, 00:05:07.108 "flush": true, 00:05:07.108 "get_zone_info": false, 00:05:07.108 "nvme_admin": false, 00:05:07.108 "nvme_io": false, 00:05:07.108 "nvme_io_md": false, 00:05:07.108 "nvme_iov_md": false, 00:05:07.108 "read": true, 00:05:07.108 "reset": true, 00:05:07.108 "seek_data": false, 00:05:07.108 "seek_hole": false, 00:05:07.108 "unmap": true, 00:05:07.108 "write": true, 00:05:07.108 "write_zeroes": true, 00:05:07.108 "zcopy": true, 00:05:07.108 "zone_append": false, 00:05:07.108 "zone_management": false 00:05:07.108 }, 00:05:07.108 "uuid": "70a0aecf-5b51-41af-99b4-0a26a4d1bbfb", 00:05:07.108 "zoned": false 00:05:07.108 } 00:05:07.108 ]' 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:07.108 06:51:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.108 [2024-07-13 06:51:14.991823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:07.108 [2024-07-13 06:51:14.991871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:07.108 [2024-07-13 06:51:14.991891] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe57390 00:05:07.108 [2024-07-13 06:51:14.991901] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.108 [2024-07-13 06:51:14.993619] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.108 [2024-07-13 06:51:14.993652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.108 Passthru0 00:05:07.108 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.108 06:51:14 
rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.109 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.109 06:51:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.109 06:51:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.109 { 00:05:07.109 "aliases": [ 00:05:07.109 "70a0aecf-5b51-41af-99b4-0a26a4d1bbfb" 00:05:07.109 ], 00:05:07.109 "assigned_rate_limits": { 00:05:07.109 "r_mbytes_per_sec": 0, 00:05:07.109 "rw_ios_per_sec": 0, 00:05:07.109 "rw_mbytes_per_sec": 0, 00:05:07.109 "w_mbytes_per_sec": 0 00:05:07.109 }, 00:05:07.109 "block_size": 512, 00:05:07.109 "claim_type": "exclusive_write", 00:05:07.109 "claimed": true, 00:05:07.109 "driver_specific": {}, 00:05:07.109 "memory_domains": [ 00:05:07.109 { 00:05:07.109 "dma_device_id": "system", 00:05:07.109 "dma_device_type": 1 00:05:07.109 }, 00:05:07.109 { 00:05:07.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.109 "dma_device_type": 2 00:05:07.109 } 00:05:07.109 ], 00:05:07.109 "name": "Malloc0", 00:05:07.109 "num_blocks": 16384, 00:05:07.109 "product_name": "Malloc disk", 00:05:07.109 "supported_io_types": { 00:05:07.109 "abort": true, 00:05:07.109 "compare": false, 00:05:07.109 "compare_and_write": false, 00:05:07.109 "copy": true, 00:05:07.109 "flush": true, 00:05:07.109 "get_zone_info": false, 00:05:07.109 "nvme_admin": false, 00:05:07.109 "nvme_io": false, 00:05:07.109 "nvme_io_md": false, 00:05:07.109 "nvme_iov_md": false, 00:05:07.109 "read": true, 00:05:07.109 "reset": true, 00:05:07.109 "seek_data": false, 00:05:07.109 "seek_hole": false, 00:05:07.109 "unmap": true, 00:05:07.109 "write": true, 00:05:07.109 "write_zeroes": true, 00:05:07.109 "zcopy": true, 00:05:07.109 "zone_append": false, 00:05:07.109 "zone_management": false 00:05:07.109 }, 00:05:07.109 "uuid": "70a0aecf-5b51-41af-99b4-0a26a4d1bbfb", 00:05:07.109 "zoned": false 00:05:07.109 }, 00:05:07.109 { 00:05:07.109 "aliases": [ 00:05:07.109 "d1e61921-c419-553e-b102-a151e1847d7f" 00:05:07.109 ], 00:05:07.109 "assigned_rate_limits": { 00:05:07.109 "r_mbytes_per_sec": 0, 00:05:07.109 "rw_ios_per_sec": 0, 00:05:07.109 "rw_mbytes_per_sec": 0, 00:05:07.109 "w_mbytes_per_sec": 0 00:05:07.109 }, 00:05:07.109 "block_size": 512, 00:05:07.109 "claimed": false, 00:05:07.109 "driver_specific": { 00:05:07.109 "passthru": { 00:05:07.109 "base_bdev_name": "Malloc0", 00:05:07.109 "name": "Passthru0" 00:05:07.109 } 00:05:07.109 }, 00:05:07.109 "memory_domains": [ 00:05:07.109 { 00:05:07.109 "dma_device_id": "system", 00:05:07.109 "dma_device_type": 1 00:05:07.109 }, 00:05:07.109 { 00:05:07.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.109 "dma_device_type": 2 00:05:07.109 } 00:05:07.109 ], 00:05:07.109 "name": "Passthru0", 00:05:07.109 "num_blocks": 16384, 00:05:07.109 "product_name": "passthru", 00:05:07.109 "supported_io_types": { 00:05:07.109 "abort": true, 00:05:07.109 "compare": false, 00:05:07.109 "compare_and_write": false, 00:05:07.109 "copy": true, 00:05:07.109 "flush": true, 00:05:07.109 "get_zone_info": false, 00:05:07.109 "nvme_admin": false, 00:05:07.109 "nvme_io": false, 00:05:07.109 "nvme_io_md": false, 00:05:07.109 "nvme_iov_md": false, 00:05:07.109 "read": true, 00:05:07.109 "reset": true, 00:05:07.109 "seek_data": false, 00:05:07.109 "seek_hole": false, 00:05:07.109 "unmap": true, 00:05:07.109 "write": true, 00:05:07.109 "write_zeroes": true, 00:05:07.109 "zcopy": true, 
00:05:07.109 "zone_append": false, 00:05:07.109 "zone_management": false 00:05:07.109 }, 00:05:07.109 "uuid": "d1e61921-c419-553e-b102-a151e1847d7f", 00:05:07.109 "zoned": false 00:05:07.109 } 00:05:07.109 ]' 00:05:07.109 06:51:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:07.109 06:51:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.109 06:51:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.109 06:51:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.109 06:51:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.109 06:51:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:07.109 06:51:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:07.109 06:51:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.109 00:05:07.109 real 0m0.337s 00:05:07.109 user 0m0.219s 00:05:07.109 sys 0m0.036s 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.109 ************************************ 00:05:07.109 END TEST rpc_integrity 00:05:07.109 06:51:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.109 ************************************ 00:05:07.367 06:51:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:07.368 06:51:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:07.368 06:51:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.368 06:51:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.368 06:51:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.368 ************************************ 00:05:07.368 START TEST rpc_plugins 00:05:07.368 ************************************ 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:07.368 { 
00:05:07.368 "aliases": [ 00:05:07.368 "84b6f4c3-4c22-4099-b235-b3034c2ad23f" 00:05:07.368 ], 00:05:07.368 "assigned_rate_limits": { 00:05:07.368 "r_mbytes_per_sec": 0, 00:05:07.368 "rw_ios_per_sec": 0, 00:05:07.368 "rw_mbytes_per_sec": 0, 00:05:07.368 "w_mbytes_per_sec": 0 00:05:07.368 }, 00:05:07.368 "block_size": 4096, 00:05:07.368 "claimed": false, 00:05:07.368 "driver_specific": {}, 00:05:07.368 "memory_domains": [ 00:05:07.368 { 00:05:07.368 "dma_device_id": "system", 00:05:07.368 "dma_device_type": 1 00:05:07.368 }, 00:05:07.368 { 00:05:07.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.368 "dma_device_type": 2 00:05:07.368 } 00:05:07.368 ], 00:05:07.368 "name": "Malloc1", 00:05:07.368 "num_blocks": 256, 00:05:07.368 "product_name": "Malloc disk", 00:05:07.368 "supported_io_types": { 00:05:07.368 "abort": true, 00:05:07.368 "compare": false, 00:05:07.368 "compare_and_write": false, 00:05:07.368 "copy": true, 00:05:07.368 "flush": true, 00:05:07.368 "get_zone_info": false, 00:05:07.368 "nvme_admin": false, 00:05:07.368 "nvme_io": false, 00:05:07.368 "nvme_io_md": false, 00:05:07.368 "nvme_iov_md": false, 00:05:07.368 "read": true, 00:05:07.368 "reset": true, 00:05:07.368 "seek_data": false, 00:05:07.368 "seek_hole": false, 00:05:07.368 "unmap": true, 00:05:07.368 "write": true, 00:05:07.368 "write_zeroes": true, 00:05:07.368 "zcopy": true, 00:05:07.368 "zone_append": false, 00:05:07.368 "zone_management": false 00:05:07.368 }, 00:05:07.368 "uuid": "84b6f4c3-4c22-4099-b235-b3034c2ad23f", 00:05:07.368 "zoned": false 00:05:07.368 } 00:05:07.368 ]' 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:07.368 06:51:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:07.368 00:05:07.368 real 0m0.166s 00:05:07.368 user 0m0.105s 00:05:07.368 sys 0m0.018s 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.368 ************************************ 00:05:07.368 END TEST rpc_plugins 00:05:07.368 06:51:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.368 ************************************ 00:05:07.368 06:51:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:07.368 06:51:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:07.368 06:51:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.368 06:51:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.368 06:51:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.368 ************************************ 00:05:07.368 START TEST 
rpc_trace_cmd_test 00:05:07.368 ************************************ 00:05:07.368 06:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:07.368 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:07.368 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:07.368 06:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.368 06:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:07.626 06:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.626 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:07.626 "bdev": { 00:05:07.626 "mask": "0x8", 00:05:07.626 "tpoint_mask": "0xffffffffffffffff" 00:05:07.626 }, 00:05:07.626 "bdev_nvme": { 00:05:07.626 "mask": "0x4000", 00:05:07.626 "tpoint_mask": "0x0" 00:05:07.626 }, 00:05:07.626 "blobfs": { 00:05:07.626 "mask": "0x80", 00:05:07.626 "tpoint_mask": "0x0" 00:05:07.626 }, 00:05:07.626 "dsa": { 00:05:07.626 "mask": "0x200", 00:05:07.626 "tpoint_mask": "0x0" 00:05:07.626 }, 00:05:07.626 "ftl": { 00:05:07.626 "mask": "0x40", 00:05:07.626 "tpoint_mask": "0x0" 00:05:07.626 }, 00:05:07.626 "iaa": { 00:05:07.626 "mask": "0x1000", 00:05:07.626 "tpoint_mask": "0x0" 00:05:07.626 }, 00:05:07.626 "iscsi_conn": { 00:05:07.626 "mask": "0x2", 00:05:07.626 "tpoint_mask": "0x0" 00:05:07.626 }, 00:05:07.627 "nvme_pcie": { 00:05:07.627 "mask": "0x800", 00:05:07.627 "tpoint_mask": "0x0" 00:05:07.627 }, 00:05:07.627 "nvme_tcp": { 00:05:07.627 "mask": "0x2000", 00:05:07.627 "tpoint_mask": "0x0" 00:05:07.627 }, 00:05:07.627 "nvmf_rdma": { 00:05:07.627 "mask": "0x10", 00:05:07.627 "tpoint_mask": "0x0" 00:05:07.627 }, 00:05:07.627 "nvmf_tcp": { 00:05:07.627 "mask": "0x20", 00:05:07.627 "tpoint_mask": "0x0" 00:05:07.627 }, 00:05:07.627 "scsi": { 00:05:07.627 "mask": "0x4", 00:05:07.627 "tpoint_mask": "0x0" 00:05:07.627 }, 00:05:07.627 "sock": { 00:05:07.627 "mask": "0x8000", 00:05:07.627 "tpoint_mask": "0x0" 00:05:07.627 }, 00:05:07.627 "thread": { 00:05:07.627 "mask": "0x400", 00:05:07.627 "tpoint_mask": "0x0" 00:05:07.627 }, 00:05:07.627 "tpoint_group_mask": "0x8", 00:05:07.627 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73020" 00:05:07.627 }' 00:05:07.627 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:07.627 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:07.627 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:07.627 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:07.627 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:07.627 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:07.627 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:07.627 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:07.627 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:07.884 06:51:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:07.884 00:05:07.884 real 0m0.287s 00:05:07.884 user 0m0.241s 00:05:07.884 sys 0m0.034s 00:05:07.885 06:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.885 06:51:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:07.885 ************************************ 00:05:07.885 END TEST 
rpc_trace_cmd_test 00:05:07.885 ************************************ 00:05:07.885 06:51:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:07.885 06:51:15 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:07.885 06:51:15 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:07.885 06:51:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.885 06:51:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.885 06:51:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.885 ************************************ 00:05:07.885 START TEST go_rpc 00:05:07.885 ************************************ 00:05:07.885 06:51:15 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.885 06:51:15 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.885 06:51:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.885 06:51:15 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["cd1d1116-0ab3-4830-8e39-fa0a2d3d6fc5"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"cd1d1116-0ab3-4830-8e39-fa0a2d3d6fc5","zoned":false}]' 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:07.885 06:51:15 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.885 06:51:15 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.885 06:51:15 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:07.885 06:51:15 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:08.143 06:51:16 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:08.143 00:05:08.143 real 0m0.229s 00:05:08.143 user 0m0.157s 00:05:08.143 sys 0m0.037s 00:05:08.143 06:51:16 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.143 06:51:16 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.143 ************************************ 00:05:08.143 END TEST 
go_rpc 00:05:08.143 ************************************ 00:05:08.143 06:51:16 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:08.143 06:51:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:08.143 06:51:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:08.143 06:51:16 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.143 06:51:16 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.143 06:51:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.143 ************************************ 00:05:08.143 START TEST rpc_daemon_integrity 00:05:08.143 ************************************ 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:08.143 { 00:05:08.143 "aliases": [ 00:05:08.143 "8e5c4235-b1ba-400a-bf50-9564f30adb96" 00:05:08.143 ], 00:05:08.143 "assigned_rate_limits": { 00:05:08.143 "r_mbytes_per_sec": 0, 00:05:08.143 "rw_ios_per_sec": 0, 00:05:08.143 "rw_mbytes_per_sec": 0, 00:05:08.143 "w_mbytes_per_sec": 0 00:05:08.143 }, 00:05:08.143 "block_size": 512, 00:05:08.143 "claimed": false, 00:05:08.143 "driver_specific": {}, 00:05:08.143 "memory_domains": [ 00:05:08.143 { 00:05:08.143 "dma_device_id": "system", 00:05:08.143 "dma_device_type": 1 00:05:08.143 }, 00:05:08.143 { 00:05:08.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.143 "dma_device_type": 2 00:05:08.143 } 00:05:08.143 ], 00:05:08.143 "name": "Malloc3", 00:05:08.143 "num_blocks": 16384, 00:05:08.143 "product_name": "Malloc disk", 00:05:08.143 "supported_io_types": { 00:05:08.143 "abort": true, 00:05:08.143 "compare": false, 00:05:08.143 "compare_and_write": false, 00:05:08.143 "copy": true, 00:05:08.143 "flush": true, 00:05:08.143 "get_zone_info": false, 00:05:08.143 "nvme_admin": false, 00:05:08.143 "nvme_io": false, 00:05:08.143 "nvme_io_md": false, 00:05:08.143 "nvme_iov_md": false, 00:05:08.143 "read": true, 00:05:08.143 "reset": true, 00:05:08.143 "seek_data": 
false, 00:05:08.143 "seek_hole": false, 00:05:08.143 "unmap": true, 00:05:08.143 "write": true, 00:05:08.143 "write_zeroes": true, 00:05:08.143 "zcopy": true, 00:05:08.143 "zone_append": false, 00:05:08.143 "zone_management": false 00:05:08.143 }, 00:05:08.143 "uuid": "8e5c4235-b1ba-400a-bf50-9564f30adb96", 00:05:08.143 "zoned": false 00:05:08.143 } 00:05:08.143 ]' 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.143 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.401 [2024-07-13 06:51:16.222374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:08.401 [2024-07-13 06:51:16.222434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:08.401 [2024-07-13 06:51:16.222457] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1008b50 00:05:08.401 [2024-07-13 06:51:16.222468] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:08.401 [2024-07-13 06:51:16.223995] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:08.401 [2024-07-13 06:51:16.224027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:08.401 Passthru0 00:05:08.401 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.401 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:08.401 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.401 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.401 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.401 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:08.401 { 00:05:08.401 "aliases": [ 00:05:08.401 "8e5c4235-b1ba-400a-bf50-9564f30adb96" 00:05:08.401 ], 00:05:08.401 "assigned_rate_limits": { 00:05:08.401 "r_mbytes_per_sec": 0, 00:05:08.401 "rw_ios_per_sec": 0, 00:05:08.401 "rw_mbytes_per_sec": 0, 00:05:08.401 "w_mbytes_per_sec": 0 00:05:08.401 }, 00:05:08.401 "block_size": 512, 00:05:08.401 "claim_type": "exclusive_write", 00:05:08.401 "claimed": true, 00:05:08.401 "driver_specific": {}, 00:05:08.401 "memory_domains": [ 00:05:08.401 { 00:05:08.401 "dma_device_id": "system", 00:05:08.401 "dma_device_type": 1 00:05:08.401 }, 00:05:08.401 { 00:05:08.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.401 "dma_device_type": 2 00:05:08.401 } 00:05:08.401 ], 00:05:08.401 "name": "Malloc3", 00:05:08.401 "num_blocks": 16384, 00:05:08.401 "product_name": "Malloc disk", 00:05:08.401 "supported_io_types": { 00:05:08.401 "abort": true, 00:05:08.401 "compare": false, 00:05:08.401 "compare_and_write": false, 00:05:08.401 "copy": true, 00:05:08.401 "flush": true, 00:05:08.401 "get_zone_info": false, 00:05:08.401 "nvme_admin": false, 00:05:08.401 "nvme_io": false, 00:05:08.401 "nvme_io_md": false, 00:05:08.401 "nvme_iov_md": false, 00:05:08.401 "read": true, 00:05:08.401 "reset": true, 00:05:08.401 "seek_data": false, 00:05:08.401 "seek_hole": false, 00:05:08.401 "unmap": true, 00:05:08.401 "write": true, 00:05:08.401 "write_zeroes": 
true, 00:05:08.401 "zcopy": true, 00:05:08.401 "zone_append": false, 00:05:08.401 "zone_management": false 00:05:08.401 }, 00:05:08.401 "uuid": "8e5c4235-b1ba-400a-bf50-9564f30adb96", 00:05:08.401 "zoned": false 00:05:08.401 }, 00:05:08.401 { 00:05:08.401 "aliases": [ 00:05:08.401 "7ae114d9-1e06-53cf-a378-bb86dfd4deb7" 00:05:08.401 ], 00:05:08.401 "assigned_rate_limits": { 00:05:08.401 "r_mbytes_per_sec": 0, 00:05:08.402 "rw_ios_per_sec": 0, 00:05:08.402 "rw_mbytes_per_sec": 0, 00:05:08.402 "w_mbytes_per_sec": 0 00:05:08.402 }, 00:05:08.402 "block_size": 512, 00:05:08.402 "claimed": false, 00:05:08.402 "driver_specific": { 00:05:08.402 "passthru": { 00:05:08.402 "base_bdev_name": "Malloc3", 00:05:08.402 "name": "Passthru0" 00:05:08.402 } 00:05:08.402 }, 00:05:08.402 "memory_domains": [ 00:05:08.402 { 00:05:08.402 "dma_device_id": "system", 00:05:08.402 "dma_device_type": 1 00:05:08.402 }, 00:05:08.402 { 00:05:08.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.402 "dma_device_type": 2 00:05:08.402 } 00:05:08.402 ], 00:05:08.402 "name": "Passthru0", 00:05:08.402 "num_blocks": 16384, 00:05:08.402 "product_name": "passthru", 00:05:08.402 "supported_io_types": { 00:05:08.402 "abort": true, 00:05:08.402 "compare": false, 00:05:08.402 "compare_and_write": false, 00:05:08.402 "copy": true, 00:05:08.402 "flush": true, 00:05:08.402 "get_zone_info": false, 00:05:08.402 "nvme_admin": false, 00:05:08.402 "nvme_io": false, 00:05:08.402 "nvme_io_md": false, 00:05:08.402 "nvme_iov_md": false, 00:05:08.402 "read": true, 00:05:08.402 "reset": true, 00:05:08.402 "seek_data": false, 00:05:08.402 "seek_hole": false, 00:05:08.402 "unmap": true, 00:05:08.402 "write": true, 00:05:08.402 "write_zeroes": true, 00:05:08.402 "zcopy": true, 00:05:08.402 "zone_append": false, 00:05:08.402 "zone_management": false 00:05:08.402 }, 00:05:08.402 "uuid": "7ae114d9-1e06-53cf-a378-bb86dfd4deb7", 00:05:08.402 "zoned": false 00:05:08.402 } 00:05:08.402 ]' 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:08.402 
00:05:08.402 real 0m0.343s 00:05:08.402 user 0m0.211s 00:05:08.402 sys 0m0.046s 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.402 ************************************ 00:05:08.402 END TEST rpc_daemon_integrity 00:05:08.402 06:51:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.402 ************************************ 00:05:08.402 06:51:16 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:08.402 06:51:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:08.402 06:51:16 rpc -- rpc/rpc.sh@84 -- # killprocess 73020 00:05:08.402 06:51:16 rpc -- common/autotest_common.sh@948 -- # '[' -z 73020 ']' 00:05:08.402 06:51:16 rpc -- common/autotest_common.sh@952 -- # kill -0 73020 00:05:08.402 06:51:16 rpc -- common/autotest_common.sh@953 -- # uname 00:05:08.402 06:51:16 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.402 06:51:16 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73020 00:05:08.660 06:51:16 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.660 06:51:16 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.660 killing process with pid 73020 00:05:08.660 06:51:16 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73020' 00:05:08.660 06:51:16 rpc -- common/autotest_common.sh@967 -- # kill 73020 00:05:08.660 06:51:16 rpc -- common/autotest_common.sh@972 -- # wait 73020 00:05:09.226 00:05:09.226 real 0m3.352s 00:05:09.226 user 0m4.347s 00:05:09.226 sys 0m0.770s 00:05:09.226 06:51:17 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.226 06:51:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.226 ************************************ 00:05:09.226 END TEST rpc 00:05:09.226 ************************************ 00:05:09.226 06:51:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.226 06:51:17 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:09.226 06:51:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.226 06:51:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.226 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.226 ************************************ 00:05:09.226 START TEST skip_rpc 00:05:09.226 ************************************ 00:05:09.226 06:51:17 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:09.226 * Looking for test storage... 
00:05:09.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.226 06:51:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.226 06:51:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:09.226 06:51:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:09.226 06:51:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.226 06:51:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.226 06:51:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.226 ************************************ 00:05:09.226 START TEST skip_rpc 00:05:09.226 ************************************ 00:05:09.226 06:51:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:09.226 06:51:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=73281 00:05:09.226 06:51:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.226 06:51:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:09.226 06:51:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:09.226 [2024-07-13 06:51:17.254792] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:09.226 [2024-07-13 06:51:17.255580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73281 ] 00:05:09.484 [2024-07-13 06:51:17.393278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.484 [2024-07-13 06:51:17.505915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.772 2024/07/13 06:51:22 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:14.772 06:51:22 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 73281 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 73281 ']' 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 73281 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73281 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73281' 00:05:14.772 killing process with pid 73281 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 73281 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 73281 00:05:14.772 00:05:14.772 real 0m5.595s 00:05:14.772 user 0m5.095s 00:05:14.772 sys 0m0.399s 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.772 06:51:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.772 ************************************ 00:05:14.772 END TEST skip_rpc 00:05:14.772 ************************************ 00:05:14.772 06:51:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:14.772 06:51:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:14.772 06:51:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.772 06:51:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.772 06:51:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.772 ************************************ 00:05:14.772 START TEST skip_rpc_with_json 00:05:14.772 ************************************ 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=73379 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 73379 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 73379 ']' 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
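In the skip_rpc case above the target is started with --no-rpc-server, so /var/tmp/spdk.sock is never created and the Go RPC client fails with "could not connect to a Unix socket". The same condition can be checked from C with SPDK's JSON-RPC client; the sketch below assumes spdk_jsonrpc_client_connect()/spdk_jsonrpc_client_close() from spdk/jsonrpc.h and is purely illustrative -- the test itself drives everything through rpc_cmd/rpc.py.

```c
#include <stdio.h>
#include <sys/socket.h>
#include "spdk/jsonrpc.h"

int
main(void)
{
	/* With --no-rpc-server the target never listens on this socket,
	 * so the connect attempt is expected to fail. */
	struct spdk_jsonrpc_client *client =
		spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);

	if (client == NULL) {
		fprintf(stderr, "could not connect to /var/tmp/spdk.sock\n");
		return 1;
	}

	spdk_jsonrpc_client_close(client);
	return 0;
}
```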
00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.772 06:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.030 [2024-07-13 06:51:22.902188] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:15.030 [2024-07-13 06:51:22.902304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73379 ] 00:05:15.030 [2024-07-13 06:51:23.041942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.288 [2024-07-13 06:51:23.166166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.855 [2024-07-13 06:51:23.905806] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:15.855 2024/07/13 06:51:23 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:15.855 request: 00:05:15.855 { 00:05:15.855 "method": "nvmf_get_transports", 00:05:15.855 "params": { 00:05:15.855 "trtype": "tcp" 00:05:15.855 } 00:05:15.855 } 00:05:15.855 Got JSON-RPC error response 00:05:15.855 GoRPCClient: error on JSON-RPC call 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.855 [2024-07-13 06:51:23.917975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.855 06:51:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.113 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.113 06:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.113 { 00:05:16.113 "subsystems": [ 00:05:16.113 { 00:05:16.113 "subsystem": "keyring", 00:05:16.113 "config": [] 00:05:16.113 }, 00:05:16.113 { 00:05:16.113 "subsystem": "iobuf", 00:05:16.113 "config": [ 00:05:16.113 { 00:05:16.113 "method": "iobuf_set_options", 00:05:16.113 "params": { 00:05:16.113 "large_bufsize": 135168, 00:05:16.113 "large_pool_count": 1024, 00:05:16.113 "small_bufsize": 8192, 00:05:16.113 "small_pool_count": 8192 00:05:16.113 } 00:05:16.113 } 
00:05:16.113 ] 00:05:16.113 }, 00:05:16.113 { 00:05:16.113 "subsystem": "sock", 00:05:16.113 "config": [ 00:05:16.113 { 00:05:16.113 "method": "sock_set_default_impl", 00:05:16.113 "params": { 00:05:16.113 "impl_name": "posix" 00:05:16.113 } 00:05:16.113 }, 00:05:16.113 { 00:05:16.113 "method": "sock_impl_set_options", 00:05:16.113 "params": { 00:05:16.113 "enable_ktls": false, 00:05:16.113 "enable_placement_id": 0, 00:05:16.113 "enable_quickack": false, 00:05:16.113 "enable_recv_pipe": true, 00:05:16.113 "enable_zerocopy_send_client": false, 00:05:16.113 "enable_zerocopy_send_server": true, 00:05:16.113 "impl_name": "ssl", 00:05:16.113 "recv_buf_size": 4096, 00:05:16.114 "send_buf_size": 4096, 00:05:16.114 "tls_version": 0, 00:05:16.114 "zerocopy_threshold": 0 00:05:16.114 } 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "method": "sock_impl_set_options", 00:05:16.114 "params": { 00:05:16.114 "enable_ktls": false, 00:05:16.114 "enable_placement_id": 0, 00:05:16.114 "enable_quickack": false, 00:05:16.114 "enable_recv_pipe": true, 00:05:16.114 "enable_zerocopy_send_client": false, 00:05:16.114 "enable_zerocopy_send_server": true, 00:05:16.114 "impl_name": "posix", 00:05:16.114 "recv_buf_size": 2097152, 00:05:16.114 "send_buf_size": 2097152, 00:05:16.114 "tls_version": 0, 00:05:16.114 "zerocopy_threshold": 0 00:05:16.114 } 00:05:16.114 } 00:05:16.114 ] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "vmd", 00:05:16.114 "config": [] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "accel", 00:05:16.114 "config": [ 00:05:16.114 { 00:05:16.114 "method": "accel_set_options", 00:05:16.114 "params": { 00:05:16.114 "buf_count": 2048, 00:05:16.114 "large_cache_size": 16, 00:05:16.114 "sequence_count": 2048, 00:05:16.114 "small_cache_size": 128, 00:05:16.114 "task_count": 2048 00:05:16.114 } 00:05:16.114 } 00:05:16.114 ] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "bdev", 00:05:16.114 "config": [ 00:05:16.114 { 00:05:16.114 "method": "bdev_set_options", 00:05:16.114 "params": { 00:05:16.114 "bdev_auto_examine": true, 00:05:16.114 "bdev_io_cache_size": 256, 00:05:16.114 "bdev_io_pool_size": 65535, 00:05:16.114 "iobuf_large_cache_size": 16, 00:05:16.114 "iobuf_small_cache_size": 128 00:05:16.114 } 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "method": "bdev_raid_set_options", 00:05:16.114 "params": { 00:05:16.114 "process_window_size_kb": 1024 00:05:16.114 } 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "method": "bdev_iscsi_set_options", 00:05:16.114 "params": { 00:05:16.114 "timeout_sec": 30 00:05:16.114 } 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "method": "bdev_nvme_set_options", 00:05:16.114 "params": { 00:05:16.114 "action_on_timeout": "none", 00:05:16.114 "allow_accel_sequence": false, 00:05:16.114 "arbitration_burst": 0, 00:05:16.114 "bdev_retry_count": 3, 00:05:16.114 "ctrlr_loss_timeout_sec": 0, 00:05:16.114 "delay_cmd_submit": true, 00:05:16.114 "dhchap_dhgroups": [ 00:05:16.114 "null", 00:05:16.114 "ffdhe2048", 00:05:16.114 "ffdhe3072", 00:05:16.114 "ffdhe4096", 00:05:16.114 "ffdhe6144", 00:05:16.114 "ffdhe8192" 00:05:16.114 ], 00:05:16.114 "dhchap_digests": [ 00:05:16.114 "sha256", 00:05:16.114 "sha384", 00:05:16.114 "sha512" 00:05:16.114 ], 00:05:16.114 "disable_auto_failback": false, 00:05:16.114 "fast_io_fail_timeout_sec": 0, 00:05:16.114 "generate_uuids": false, 00:05:16.114 "high_priority_weight": 0, 00:05:16.114 "io_path_stat": false, 00:05:16.114 "io_queue_requests": 0, 00:05:16.114 "keep_alive_timeout_ms": 10000, 00:05:16.114 "low_priority_weight": 0, 
00:05:16.114 "medium_priority_weight": 0, 00:05:16.114 "nvme_adminq_poll_period_us": 10000, 00:05:16.114 "nvme_error_stat": false, 00:05:16.114 "nvme_ioq_poll_period_us": 0, 00:05:16.114 "rdma_cm_event_timeout_ms": 0, 00:05:16.114 "rdma_max_cq_size": 0, 00:05:16.114 "rdma_srq_size": 0, 00:05:16.114 "reconnect_delay_sec": 0, 00:05:16.114 "timeout_admin_us": 0, 00:05:16.114 "timeout_us": 0, 00:05:16.114 "transport_ack_timeout": 0, 00:05:16.114 "transport_retry_count": 4, 00:05:16.114 "transport_tos": 0 00:05:16.114 } 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "method": "bdev_nvme_set_hotplug", 00:05:16.114 "params": { 00:05:16.114 "enable": false, 00:05:16.114 "period_us": 100000 00:05:16.114 } 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "method": "bdev_wait_for_examine" 00:05:16.114 } 00:05:16.114 ] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "scsi", 00:05:16.114 "config": null 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "scheduler", 00:05:16.114 "config": [ 00:05:16.114 { 00:05:16.114 "method": "framework_set_scheduler", 00:05:16.114 "params": { 00:05:16.114 "name": "static" 00:05:16.114 } 00:05:16.114 } 00:05:16.114 ] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "vhost_scsi", 00:05:16.114 "config": [] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "vhost_blk", 00:05:16.114 "config": [] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "ublk", 00:05:16.114 "config": [] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "nbd", 00:05:16.114 "config": [] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "nvmf", 00:05:16.114 "config": [ 00:05:16.114 { 00:05:16.114 "method": "nvmf_set_config", 00:05:16.114 "params": { 00:05:16.114 "admin_cmd_passthru": { 00:05:16.114 "identify_ctrlr": false 00:05:16.114 }, 00:05:16.114 "discovery_filter": "match_any" 00:05:16.114 } 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "method": "nvmf_set_max_subsystems", 00:05:16.114 "params": { 00:05:16.114 "max_subsystems": 1024 00:05:16.114 } 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "method": "nvmf_set_crdt", 00:05:16.114 "params": { 00:05:16.114 "crdt1": 0, 00:05:16.114 "crdt2": 0, 00:05:16.114 "crdt3": 0 00:05:16.114 } 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "method": "nvmf_create_transport", 00:05:16.114 "params": { 00:05:16.114 "abort_timeout_sec": 1, 00:05:16.114 "ack_timeout": 0, 00:05:16.114 "buf_cache_size": 4294967295, 00:05:16.114 "c2h_success": true, 00:05:16.114 "data_wr_pool_size": 0, 00:05:16.114 "dif_insert_or_strip": false, 00:05:16.114 "in_capsule_data_size": 4096, 00:05:16.114 "io_unit_size": 131072, 00:05:16.114 "max_aq_depth": 128, 00:05:16.114 "max_io_qpairs_per_ctrlr": 127, 00:05:16.114 "max_io_size": 131072, 00:05:16.114 "max_queue_depth": 128, 00:05:16.114 "num_shared_buffers": 511, 00:05:16.114 "sock_priority": 0, 00:05:16.114 "trtype": "TCP", 00:05:16.114 "zcopy": false 00:05:16.114 } 00:05:16.114 } 00:05:16.114 ] 00:05:16.114 }, 00:05:16.114 { 00:05:16.114 "subsystem": "iscsi", 00:05:16.114 "config": [ 00:05:16.114 { 00:05:16.114 "method": "iscsi_set_options", 00:05:16.114 "params": { 00:05:16.114 "allow_duplicated_isid": false, 00:05:16.114 "chap_group": 0, 00:05:16.114 "data_out_pool_size": 2048, 00:05:16.114 "default_time2retain": 20, 00:05:16.114 "default_time2wait": 2, 00:05:16.114 "disable_chap": false, 00:05:16.114 "error_recovery_level": 0, 00:05:16.114 "first_burst_length": 8192, 00:05:16.114 "immediate_data": true, 00:05:16.114 "immediate_data_pool_size": 16384, 00:05:16.114 "max_connections_per_session": 
2, 00:05:16.114 "max_large_datain_per_connection": 64, 00:05:16.114 "max_queue_depth": 64, 00:05:16.114 "max_r2t_per_connection": 4, 00:05:16.114 "max_sessions": 128, 00:05:16.114 "mutual_chap": false, 00:05:16.114 "node_base": "iqn.2016-06.io.spdk", 00:05:16.114 "nop_in_interval": 30, 00:05:16.114 "nop_timeout": 60, 00:05:16.114 "pdu_pool_size": 36864, 00:05:16.114 "require_chap": false 00:05:16.114 } 00:05:16.114 } 00:05:16.114 ] 00:05:16.114 } 00:05:16.114 ] 00:05:16.114 } 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 73379 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 73379 ']' 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 73379 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73379 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.114 killing process with pid 73379 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73379' 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 73379 00:05:16.114 06:51:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 73379 00:05:16.681 06:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=73418 00:05:16.681 06:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:16.681 06:51:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 73418 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 73418 ']' 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 73418 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73418 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.947 killing process with pid 73418 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73418' 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 73418 00:05:21.947 06:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 73418 00:05:22.513 06:51:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.513 06:51:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.513 00:05:22.513 real 0m7.476s 00:05:22.513 user 0m7.024s 00:05:22.513 sys 0m0.868s 00:05:22.513 06:51:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.513 06:51:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.513 ************************************ 00:05:22.513 END TEST skip_rpc_with_json 00:05:22.513 ************************************ 00:05:22.513 06:51:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:22.513 06:51:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:22.513 06:51:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.513 06:51:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.514 06:51:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.514 ************************************ 00:05:22.514 START TEST skip_rpc_with_delay 00:05:22.514 ************************************ 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.514 [2024-07-13 06:51:30.438006] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
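A minimal sketch of the flag check that skip_rpc_with_delay drives above, assuming only the spdk_tgt path already shown in this log; the test itself wraps the call in the NOT helper from autotest_common.sh, replaced here by a plain exit-status test:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# spdk_tgt is expected to refuse --wait-for-rpc when --no-rpc-server is given,
# printing the "Cannot use '--wait-for-rpc' if no RPC server..." error seen above.
if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: conflicting flags were accepted" >&2
    exit 1
fi
echo "conflicting flags rejected as expected"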
00:05:22.514 [2024-07-13 06:51:30.438176] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.514 ************************************ 00:05:22.514 END TEST skip_rpc_with_delay 00:05:22.514 ************************************ 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.514 00:05:22.514 real 0m0.095s 00:05:22.514 user 0m0.058s 00:05:22.514 sys 0m0.036s 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.514 06:51:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:22.514 06:51:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:22.514 06:51:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:22.514 06:51:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:22.514 06:51:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:22.514 06:51:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.514 06:51:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.514 06:51:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.514 ************************************ 00:05:22.514 START TEST exit_on_failed_rpc_init 00:05:22.514 ************************************ 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:22.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=73528 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 73528 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 73528 ']' 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.514 06:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.514 [2024-07-13 06:51:30.585794] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:22.514 [2024-07-13 06:51:30.585928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73528 ] 00:05:22.773 [2024-07-13 06:51:30.721498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.773 [2024-07-13 06:51:30.819077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:23.709 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.709 [2024-07-13 06:51:31.628043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:23.709 [2024-07-13 06:51:31.628158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73558 ] 00:05:23.709 [2024-07-13 06:51:31.769789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.968 [2024-07-13 06:51:31.872798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.968 [2024-07-13 06:51:31.872926] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:23.968 [2024-07-13 06:51:31.872946] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:23.968 [2024-07-13 06:51:31.872958] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 73528 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 73528 ']' 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 73528 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73528 00:05:23.968 killing process with pid 73528 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73528' 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 73528 00:05:23.968 06:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 73528 00:05:24.535 ************************************ 00:05:24.535 END TEST exit_on_failed_rpc_init 00:05:24.535 ************************************ 00:05:24.535 00:05:24.535 real 0m2.012s 00:05:24.535 user 0m2.239s 00:05:24.535 sys 0m0.513s 00:05:24.535 06:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.535 06:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.535 06:51:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.535 06:51:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.535 00:05:24.535 real 0m15.481s 00:05:24.535 user 0m14.521s 00:05:24.535 sys 0m2.002s 00:05:24.535 06:51:32 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.535 ************************************ 00:05:24.535 END TEST skip_rpc 00:05:24.535 ************************************ 00:05:24.535 06:51:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.793 06:51:32 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.793 06:51:32 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:24.793 06:51:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.793 
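The exit_on_failed_rpc_init failure above comes from both targets defaulting to the same /var/tmp/spdk.sock RPC socket. A sketch, outside the test, of how two targets would normally coexist by giving each its own -r socket path (the socket names below are hypothetical):

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# Each instance gets its own core mask and its own RPC listen socket.
"$SPDK_BIN" -m 0x1 -r /var/tmp/spdk_a.sock &
"$SPDK_BIN" -m 0x2 -r /var/tmp/spdk_b.sock &
# Both targets keep running; stop each with kill -SIGINT "$pid" when done.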
06:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.793 06:51:32 -- common/autotest_common.sh@10 -- # set +x 00:05:24.793 ************************************ 00:05:24.793 START TEST rpc_client 00:05:24.793 ************************************ 00:05:24.793 06:51:32 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:24.793 * Looking for test storage... 00:05:24.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:24.793 06:51:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:24.793 OK 00:05:24.793 06:51:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:24.793 00:05:24.793 real 0m0.102s 00:05:24.793 user 0m0.054s 00:05:24.793 sys 0m0.054s 00:05:24.793 ************************************ 00:05:24.793 END TEST rpc_client 00:05:24.793 ************************************ 00:05:24.793 06:51:32 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.794 06:51:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:24.794 06:51:32 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.794 06:51:32 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:24.794 06:51:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.794 06:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.794 06:51:32 -- common/autotest_common.sh@10 -- # set +x 00:05:24.794 ************************************ 00:05:24.794 START TEST json_config 00:05:24.794 ************************************ 00:05:24.794 06:51:32 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.794 06:51:32 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.794 06:51:32 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.794 06:51:32 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.794 06:51:32 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.794 06:51:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.794 06:51:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.794 06:51:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.794 06:51:32 json_config -- paths/export.sh@5 -- # export PATH 00:05:24.794 06:51:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@47 -- # : 0 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.794 06:51:32 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:24.794 06:51:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:25.051 06:51:32 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.051 06:51:32 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:25.051 INFO: JSON configuration test init 00:05:25.051 06:51:32 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:25.051 06:51:32 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.051 06:51:32 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.051 06:51:32 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:25.051 06:51:32 json_config -- json_config/common.sh@9 -- # local app=target 00:05:25.051 06:51:32 json_config -- json_config/common.sh@10 -- # shift 00:05:25.051 06:51:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.051 Waiting for target to run... 00:05:25.051 06:51:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.051 06:51:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.051 06:51:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.051 06:51:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.051 06:51:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73676 00:05:25.051 06:51:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
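The json_config_test_start_app call traced next reduces to the pattern below, a sketch using only the spdk_tgt flags and socket path that appear in this log; the socket poll is a simplified stand-in for the harness's waitforlisten helper:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_SOCK=/var/tmp/spdk_tgt.sock

# Start the target paused (--wait-for-rpc) with a private RPC socket, then
# wait until that Unix socket shows up before issuing any rpc.py calls.
"$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --wait-for-rpc &
tgt_pid=$!
until [ -S "$RPC_SOCK" ]; do sleep 0.1; done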
00:05:25.051 06:51:32 json_config -- json_config/common.sh@25 -- # waitforlisten 73676 /var/tmp/spdk_tgt.sock 00:05:25.051 06:51:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@829 -- # '[' -z 73676 ']' 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.051 06:51:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.051 [2024-07-13 06:51:32.955765] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:25.051 [2024-07-13 06:51:32.956031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73676 ] 00:05:25.308 [2024-07-13 06:51:33.377826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.567 [2024-07-13 06:51:33.464538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.824 06:51:33 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.824 00:05:25.824 06:51:33 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:25.824 06:51:33 json_config -- json_config/common.sh@26 -- # echo '' 00:05:25.824 06:51:33 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:25.824 06:51:33 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:25.824 06:51:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.824 06:51:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.824 06:51:33 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:25.824 06:51:33 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:25.824 06:51:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.824 06:51:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.081 06:51:33 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:26.081 06:51:33 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:26.081 06:51:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:26.339 06:51:34 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:26.339 06:51:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:26.339 06:51:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.339 06:51:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.339 06:51:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:26.339 06:51:34 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:26.339 06:51:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:26.339 06:51:34 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:26.339 06:51:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:26.339 06:51:34 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:26.906 06:51:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.906 06:51:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:26.906 06:51:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.906 06:51:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:26.906 06:51:34 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:26.906 06:51:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:27.163 MallocForNvmf0 00:05:27.163 06:51:35 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:27.163 06:51:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:27.421 MallocForNvmf1 00:05:27.421 06:51:35 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:27.421 06:51:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:27.678 [2024-07-13 06:51:35.510388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.678 06:51:35 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:27.678 06:51:35 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:27.936 06:51:35 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:27.936 06:51:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:28.194 06:51:36 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:28.194 06:51:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:28.451 06:51:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:28.451 06:51:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:28.710 [2024-07-13 06:51:36.555067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:28.710 06:51:36 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:28.710 06:51:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.710 06:51:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.710 06:51:36 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:28.710 06:51:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.710 06:51:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.710 06:51:36 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:28.710 06:51:36 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:28.710 06:51:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:28.968 MallocBdevForConfigChangeCheck 00:05:28.968 06:51:36 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:28.968 06:51:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.968 06:51:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.968 06:51:36 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:28.968 06:51:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.534 INFO: shutting down applications... 00:05:29.534 06:51:37 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
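Condensed from the tgt_rpc calls traced above, the NVMe-oF build-out that json_config performs is the following rpc.py sequence; every command and argument is taken verbatim from the trace, only the RPC shell variable is added for brevity:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # backing bdevs
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0             # TCP transport init
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420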
00:05:29.534 06:51:37 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:29.534 06:51:37 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:29.534 06:51:37 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:29.534 06:51:37 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:29.794 Calling clear_iscsi_subsystem 00:05:29.794 Calling clear_nvmf_subsystem 00:05:29.794 Calling clear_nbd_subsystem 00:05:29.794 Calling clear_ublk_subsystem 00:05:29.794 Calling clear_vhost_blk_subsystem 00:05:29.794 Calling clear_vhost_scsi_subsystem 00:05:29.794 Calling clear_bdev_subsystem 00:05:29.794 06:51:37 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:29.794 06:51:37 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:29.794 06:51:37 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:29.794 06:51:37 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.794 06:51:37 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:29.794 06:51:37 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:30.088 06:51:38 json_config -- json_config/json_config.sh@345 -- # break 00:05:30.088 06:51:38 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:30.088 06:51:38 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:30.088 06:51:38 json_config -- json_config/common.sh@31 -- # local app=target 00:05:30.088 06:51:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:30.088 06:51:38 json_config -- json_config/common.sh@35 -- # [[ -n 73676 ]] 00:05:30.088 06:51:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 73676 00:05:30.088 06:51:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:30.088 06:51:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.088 06:51:38 json_config -- json_config/common.sh@41 -- # kill -0 73676 00:05:30.088 06:51:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.692 06:51:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.692 06:51:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.692 06:51:38 json_config -- json_config/common.sh@41 -- # kill -0 73676 00:05:30.692 06:51:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:30.692 06:51:38 json_config -- json_config/common.sh@43 -- # break 00:05:30.692 06:51:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:30.692 SPDK target shutdown done 00:05:30.692 06:51:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:30.692 INFO: relaunching applications... 00:05:30.692 06:51:38 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
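The relaunch traced next is the save_config / --json round trip this test is built around. A sketch, assuming the config file path shown in the trace and that the earlier save_config output was stored there (the redirect itself is not visible in the xtrace); tgt_pid refers to the background target from the earlier sketch:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
CFG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$RPC save_config > "$CFG"                      # snapshot the live configuration
kill -SIGINT "$tgt_pid" && wait "$tgt_pid"     # stop the running target
"$SPDK_BIN" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CFG" &   # rebuild it from the snapshot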
00:05:30.692 06:51:38 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:30.692 06:51:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:30.692 06:51:38 json_config -- json_config/common.sh@10 -- # shift 00:05:30.692 06:51:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.692 06:51:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.692 06:51:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.692 06:51:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.692 06:51:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.692 06:51:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73956 00:05:30.692 06:51:38 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:30.692 Waiting for target to run... 00:05:30.692 06:51:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.692 06:51:38 json_config -- json_config/common.sh@25 -- # waitforlisten 73956 /var/tmp/spdk_tgt.sock 00:05:30.692 06:51:38 json_config -- common/autotest_common.sh@829 -- # '[' -z 73956 ']' 00:05:30.692 06:51:38 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.692 06:51:38 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.692 06:51:38 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.692 06:51:38 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.692 06:51:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.692 [2024-07-13 06:51:38.615186] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:30.692 [2024-07-13 06:51:38.615300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73956 ] 00:05:31.259 [2024-07-13 06:51:39.119558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.259 [2024-07-13 06:51:39.212899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.517 [2024-07-13 06:51:39.543407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.517 [2024-07-13 06:51:39.575522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:31.775 06:51:39 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.775 06:51:39 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:31.775 00:05:31.775 06:51:39 json_config -- json_config/common.sh@26 -- # echo '' 00:05:31.775 06:51:39 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:31.775 INFO: Checking if target configuration is the same... 00:05:31.775 06:51:39 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
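The json_diff.sh runs that follow reduce to the comparison below. A sketch only: the temp-file names are made up, and config_filter.py is assumed to filter stdin to stdout, which is how json_diff.sh invokes it:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
CFG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

# Normalize both sides with "-method sort" so ordering differences don't matter,
# then diff: exit 0 means the live config matches the file the target booted from.
$RPC save_config | $FILTER -method sort > /tmp/live_sorted.json
$FILTER -method sort < "$CFG" > /tmp/file_sorted.json
diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'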
00:05:31.775 06:51:39 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.775 06:51:39 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:31.775 06:51:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.775 + '[' 2 -ne 2 ']' 00:05:31.775 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:31.775 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:31.775 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:31.775 +++ basename /dev/fd/62 00:05:31.775 ++ mktemp /tmp/62.XXX 00:05:31.775 + tmp_file_1=/tmp/62.gWi 00:05:31.775 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.775 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:31.775 + tmp_file_2=/tmp/spdk_tgt_config.json.lA4 00:05:31.775 + ret=0 00:05:31.775 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.033 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.033 + diff -u /tmp/62.gWi /tmp/spdk_tgt_config.json.lA4 00:05:32.033 INFO: JSON config files are the same 00:05:32.033 + echo 'INFO: JSON config files are the same' 00:05:32.033 + rm /tmp/62.gWi /tmp/spdk_tgt_config.json.lA4 00:05:32.033 + exit 0 00:05:32.033 06:51:40 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:32.033 06:51:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:32.033 INFO: changing configuration and checking if this can be detected... 00:05:32.033 06:51:40 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.033 06:51:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.599 06:51:40 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.599 06:51:40 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:32.599 06:51:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.599 + '[' 2 -ne 2 ']' 00:05:32.599 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:32.599 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:32.599 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:32.599 +++ basename /dev/fd/62 00:05:32.599 ++ mktemp /tmp/62.XXX 00:05:32.599 + tmp_file_1=/tmp/62.sYd 00:05:32.599 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.599 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.599 + tmp_file_2=/tmp/spdk_tgt_config.json.2ws 00:05:32.599 + ret=0 00:05:32.599 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.857 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.857 + diff -u /tmp/62.sYd /tmp/spdk_tgt_config.json.2ws 00:05:32.857 + ret=1 00:05:32.857 + echo '=== Start of file: /tmp/62.sYd ===' 00:05:32.857 + cat /tmp/62.sYd 00:05:32.857 + echo '=== End of file: /tmp/62.sYd ===' 00:05:32.857 + echo '' 00:05:32.857 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2ws ===' 00:05:32.857 + cat /tmp/spdk_tgt_config.json.2ws 00:05:32.857 + echo '=== End of file: /tmp/spdk_tgt_config.json.2ws ===' 00:05:32.857 + echo '' 00:05:32.857 + rm /tmp/62.sYd /tmp/spdk_tgt_config.json.2ws 00:05:32.857 + exit 1 00:05:32.857 INFO: configuration change detected. 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@317 -- # [[ -n 73956 ]] 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.857 06:51:40 json_config -- json_config/json_config.sh@323 -- # killprocess 73956 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@948 -- # '[' -z 73956 ']' 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@952 -- # kill -0 73956 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@953 -- # uname 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73956 00:05:32.857 
06:51:40 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73956' 00:05:32.857 killing process with pid 73956 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@967 -- # kill 73956 00:05:32.857 06:51:40 json_config -- common/autotest_common.sh@972 -- # wait 73956 00:05:33.423 06:51:41 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:33.423 06:51:41 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:33.423 06:51:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.423 06:51:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.423 06:51:41 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:33.423 INFO: Success 00:05:33.423 06:51:41 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:33.423 00:05:33.423 real 0m8.490s 00:05:33.423 user 0m11.973s 00:05:33.423 sys 0m1.995s 00:05:33.423 06:51:41 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.423 06:51:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.423 ************************************ 00:05:33.423 END TEST json_config 00:05:33.423 ************************************ 00:05:33.423 06:51:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:33.423 06:51:41 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:33.423 06:51:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.423 06:51:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.423 06:51:41 -- common/autotest_common.sh@10 -- # set +x 00:05:33.423 ************************************ 00:05:33.423 START TEST json_config_extra_key 00:05:33.423 ************************************ 00:05:33.423 06:51:41 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:33.423 06:51:41 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.423 06:51:41 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.423 06:51:41 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.423 06:51:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.423 06:51:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.423 06:51:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.423 06:51:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:33.423 06:51:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.423 06:51:41 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:33.423 06:51:41 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:33.423 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.423 INFO: launching applications... 00:05:33.424 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:33.424 06:51:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74132 00:05:33.424 Waiting for target to run... 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
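The trace above shows json_config_extra_key, via json_config/common.sh, tracking each test app in bash associative arrays (app_pid, app_socket, app_params, configs_path) and then launching spdk_tgt with those per-app parameters. A condensed sketch of that bookkeeping-and-launch pattern follows; the array names and spdk_tgt arguments are copied from the trace, while backgrounding the app and capturing $! is an assumption, since the log only shows the resulting PID (74132).

    # Per-app bookkeeping as set up in json_config/common.sh (names taken from the trace).
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']="$rootdir/test/json_config/extra_key.json")

    app=target
    # Launch spdk_tgt with the per-app core mask, memory size, RPC socket and JSON config.
    # Backgrounding with & and recording $! is assumed; the trace only shows the PID it produced.
    "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!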
00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74132 /var/tmp/spdk_tgt.sock 00:05:33.424 06:51:41 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:33.424 06:51:41 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 74132 ']' 00:05:33.424 06:51:41 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.424 06:51:41 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.424 06:51:41 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.424 06:51:41 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.424 06:51:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.424 [2024-07-13 06:51:41.461360] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:33.424 [2024-07-13 06:51:41.461478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74132 ] 00:05:33.990 [2024-07-13 06:51:41.968426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.990 [2024-07-13 06:51:42.055988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.555 06:51:42 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.555 00:05:34.555 06:51:42 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:34.555 06:51:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:34.555 INFO: shutting down applications... 00:05:34.555 06:51:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
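The waitforlisten 74132 /var/tmp/spdk_tgt.sock call traced above blocks until the freshly started target is reachable (max_retries=100 in the trace). The helper itself lives in common/autotest_common.sh and its body is not reproduced in this log, so the loop below is only one plausible shape of such a readiness wait; the hypothetical name waitfortarget and the use of scripts/rpc.py with rpc_get_methods to probe the socket are assumptions, not the real implementation.

    # Illustrative only: poll until the target answers on its RPC socket or retries run out.
    waitfortarget() {
        local pid=$1 rpc_addr=$2 max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            # Give up early if the process died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods succeeds once the app is listening on $rpc_addr.
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
    # e.g. waitfortarget 74132 /var/tmp/spdk_tgt.sock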
00:05:34.555 06:51:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:34.555 06:51:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:34.555 06:51:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:34.555 06:51:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74132 ]] 00:05:34.555 06:51:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74132 00:05:34.555 06:51:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:34.555 06:51:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.555 06:51:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74132 00:05:34.555 06:51:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.121 06:51:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.121 06:51:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.121 06:51:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74132 00:05:35.121 06:51:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.379 06:51:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.379 06:51:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.379 06:51:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74132 00:05:35.379 06:51:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.379 06:51:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:35.379 06:51:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.379 06:51:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.379 SPDK target shutdown done 00:05:35.379 Success 00:05:35.379 06:51:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:35.379 00:05:35.379 real 0m2.123s 00:05:35.379 user 0m1.572s 00:05:35.379 sys 0m0.533s 00:05:35.379 06:51:43 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.379 ************************************ 00:05:35.379 06:51:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.379 END TEST json_config_extra_key 00:05:35.379 ************************************ 00:05:35.636 06:51:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.636 06:51:43 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.636 06:51:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.637 06:51:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.637 06:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.637 ************************************ 00:05:35.637 START TEST alias_rpc 00:05:35.637 ************************************ 00:05:35.637 06:51:43 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.637 * Looking for test storage... 
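The shutdown just traced (json_config_test_shutdown_app) sends SIGINT to the target PID and then polls kill -0 up to 30 times with sleep 0.5 until the process is gone, printing "SPDK target shutdown done". A condensed sketch of that loop, reconstructed from the traced common.sh lines; the error handling beyond the 30 retries is not visible in the log and is only assumed.

    # Graceful shutdown as traced in json_config/common.sh: SIGINT, then poll for exit.
    shutdown_app() {
        local app=$1 pid=${app_pid[$app]}
        [[ -n $pid ]] || return 0
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 fails once the process has exited.
            if ! kill -0 "$pid" 2>/dev/null; then
                app_pid[$app]=
                echo "SPDK target shutdown done"
                return 0
            fi
            sleep 0.5
        done
        return 1  # still running after ~15 s; the real helper presumably escalates (assumption)
    }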
00:05:35.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:35.637 06:51:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.637 06:51:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74215 00:05:35.637 06:51:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74215 00:05:35.637 06:51:43 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 74215 ']' 00:05:35.637 06:51:43 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.637 06:51:43 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.637 06:51:43 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.637 06:51:43 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.637 06:51:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.637 06:51:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.637 [2024-07-13 06:51:43.636509] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:35.637 [2024-07-13 06:51:43.636642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74215 ] 00:05:35.894 [2024-07-13 06:51:43.774302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.894 [2024-07-13 06:51:43.890855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:36.829 06:51:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:36.829 06:51:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74215 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 74215 ']' 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 74215 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74215 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.829 killing process with pid 74215 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74215' 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@967 -- # kill 74215 00:05:36.829 06:51:44 alias_rpc -- common/autotest_common.sh@972 -- # wait 74215 00:05:37.398 00:05:37.398 real 0m1.883s 00:05:37.398 user 0m2.011s 00:05:37.398 sys 0m0.537s 00:05:37.398 06:51:45 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.398 06:51:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.398 ************************************ 00:05:37.398 END TEST alias_rpc 00:05:37.398 ************************************ 00:05:37.398 
06:51:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.398 06:51:45 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:37.398 06:51:45 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.398 06:51:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.398 06:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.398 06:51:45 -- common/autotest_common.sh@10 -- # set +x 00:05:37.398 ************************************ 00:05:37.398 START TEST dpdk_mem_utility 00:05:37.398 ************************************ 00:05:37.398 06:51:45 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.657 * Looking for test storage... 00:05:37.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:37.657 06:51:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:37.657 06:51:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74307 00:05:37.657 06:51:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.657 06:51:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74307 00:05:37.657 06:51:45 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 74307 ']' 00:05:37.657 06:51:45 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.657 06:51:45 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.657 06:51:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.657 06:51:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.657 06:51:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.657 [2024-07-13 06:51:45.578901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
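The dpdk_mem_utility test that starts here launches spdk_tgt, waits for it, and then, as the following trace shows, requests a DPDK memory dump through the env_dpdk_get_mem_stats RPC and renders it with scripts/dpdk_mem_info.py (MEM_SCRIPT in the trace). A minimal sketch of that flow; rpc.py stands in for the rpc_cmd helper used in the trace, which is an assumption about how the RPC is issued, while the script invocations and the -m 0 flag are exactly as logged.

    # Ask a running spdk_tgt to write its DPDK memory stats, then summarize them.
    MEM_SCRIPT=$rootdir/scripts/dpdk_mem_info.py
    # Writes /tmp/spdk_mem_dump.txt and returns its path, as seen in the trace output.
    "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats
    # Heap/mempool/memzone summary (the "DPDK memory size 814.000000 MiB" block below).
    $MEM_SCRIPT
    # Detailed element listing for heap id 0 (the long "element at address ..." dump below).
    $MEM_SCRIPT -m 0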
00:05:37.657 [2024-07-13 06:51:45.579026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74307 ] 00:05:37.657 [2024-07-13 06:51:45.711711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.958 [2024-07-13 06:51:45.782443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.525 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.525 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:38.525 06:51:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:38.525 06:51:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:38.525 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.525 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.525 { 00:05:38.525 "filename": "/tmp/spdk_mem_dump.txt" 00:05:38.525 } 00:05:38.525 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.525 06:51:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.525 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:38.525 1 heaps totaling size 814.000000 MiB 00:05:38.525 size: 814.000000 MiB heap id: 0 00:05:38.525 end heaps---------- 00:05:38.525 8 mempools totaling size 598.116089 MiB 00:05:38.525 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:38.525 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:38.525 size: 84.521057 MiB name: bdev_io_74307 00:05:38.525 size: 51.011292 MiB name: evtpool_74307 00:05:38.525 size: 50.003479 MiB name: msgpool_74307 00:05:38.525 size: 21.763794 MiB name: PDU_Pool 00:05:38.525 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:38.525 size: 0.026123 MiB name: Session_Pool 00:05:38.525 end mempools------- 00:05:38.525 6 memzones totaling size 4.142822 MiB 00:05:38.525 size: 1.000366 MiB name: RG_ring_0_74307 00:05:38.525 size: 1.000366 MiB name: RG_ring_1_74307 00:05:38.525 size: 1.000366 MiB name: RG_ring_4_74307 00:05:38.525 size: 1.000366 MiB name: RG_ring_5_74307 00:05:38.525 size: 0.125366 MiB name: RG_ring_2_74307 00:05:38.525 size: 0.015991 MiB name: RG_ring_3_74307 00:05:38.525 end memzones------- 00:05:38.525 06:51:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:38.785 heap id: 0 total size: 814.000000 MiB number of busy elements: 241 number of free elements: 15 00:05:38.785 list of free elements. 
size: 12.482727 MiB 00:05:38.785 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:38.785 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:38.785 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:38.785 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:38.785 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:38.785 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:38.785 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:38.785 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:38.785 element at address: 0x200000200000 with size: 0.836853 MiB 00:05:38.785 element at address: 0x20001aa00000 with size: 0.570251 MiB 00:05:38.785 element at address: 0x20000b200000 with size: 0.489258 MiB 00:05:38.785 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:38.785 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:38.785 element at address: 0x200027e00000 with size: 0.397949 MiB 00:05:38.785 element at address: 0x200003a00000 with size: 0.350769 MiB 00:05:38.785 list of standard malloc elements. size: 199.254700 MiB 00:05:38.785 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:38.785 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:38.785 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:38.785 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:38.785 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:38.785 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:38.785 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:38.785 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:38.785 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:38.785 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:05:38.785 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:38.785 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:38.785 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:38.785 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:38.785 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:38.785 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:38.785 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:38.785 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:38.786 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa937c0 
with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:38.786 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d200 with size: 0.000183 MiB 
00:05:38.786 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:38.786 element at 
address: 0x200027e6f780 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:38.786 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:38.787 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:38.787 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:38.787 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:38.787 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:38.787 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:38.787 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:38.787 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:38.787 list of memzone associated elements. size: 602.262573 MiB 00:05:38.787 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:38.787 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:38.787 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:38.787 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:38.787 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:38.787 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74307_0 00:05:38.787 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:38.787 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74307_0 00:05:38.787 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:38.787 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74307_0 00:05:38.787 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:38.787 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:38.787 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:38.787 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:38.787 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:38.787 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74307 00:05:38.787 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:38.787 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74307 00:05:38.787 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:38.787 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74307 00:05:38.787 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:38.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:38.787 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:38.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:38.787 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:38.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:38.787 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:38.787 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:38.787 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:38.787 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74307 00:05:38.787 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:38.787 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74307 00:05:38.787 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:38.787 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74307 00:05:38.787 element at address: 0x200031cfe940 with size: 1.000488 MiB 
00:05:38.787 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74307 00:05:38.787 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:38.787 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74307 00:05:38.787 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:38.787 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:38.787 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:38.787 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:38.787 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:38.787 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:38.787 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:38.787 associated memzone info: size: 0.125366 MiB name: RG_ring_2_74307 00:05:38.787 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:38.787 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:38.787 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:05:38.787 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:38.787 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:38.787 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74307 00:05:38.787 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:05:38.787 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:38.787 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:38.787 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74307 00:05:38.787 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:38.787 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74307 00:05:38.787 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:05:38.787 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:38.787 06:51:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:38.787 06:51:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74307 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 74307 ']' 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 74307 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74307 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.787 killing process with pid 74307 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74307' 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 74307 00:05:38.787 06:51:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 74307 00:05:39.044 00:05:39.044 real 0m1.695s 00:05:39.044 user 0m1.695s 00:05:39.044 sys 0m0.507s 00:05:39.044 06:51:47 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.044 06:51:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.044 ************************************ 00:05:39.044 END TEST dpdk_mem_utility 00:05:39.044 
************************************ 00:05:39.301 06:51:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.301 06:51:47 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:39.301 06:51:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.301 06:51:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.301 06:51:47 -- common/autotest_common.sh@10 -- # set +x 00:05:39.301 ************************************ 00:05:39.301 START TEST event 00:05:39.301 ************************************ 00:05:39.301 06:51:47 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:39.301 * Looking for test storage... 00:05:39.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:39.301 06:51:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:39.301 06:51:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.301 06:51:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.301 06:51:47 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:39.301 06:51:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.301 06:51:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.301 ************************************ 00:05:39.301 START TEST event_perf 00:05:39.301 ************************************ 00:05:39.301 06:51:47 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.301 Running I/O for 1 seconds...[2024-07-13 06:51:47.282407] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:39.301 [2024-07-13 06:51:47.282496] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74397 ] 00:05:39.559 [2024-07-13 06:51:47.418270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.559 [2024-07-13 06:51:47.489835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.559 [2024-07-13 06:51:47.490003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.559 [2024-07-13 06:51:47.490245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.559 [2024-07-13 06:51:47.490248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.933 Running I/O for 1 seconds... 00:05:40.933 lcore 0: 167381 00:05:40.933 lcore 1: 167382 00:05:40.933 lcore 2: 167381 00:05:40.933 lcore 3: 167380 00:05:40.933 done. 
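Every test in this log is driven by the same run_test wrapper: it prints the "START TEST <name>" banner, runs the test command under timing (the "real/user/sys" lines), and closes with the matching "END TEST <name>" banner, as seen for event_perf above and event_reactor below. The sketch below shows that banner-and-timing pattern under a hypothetical name, run_named_test; the real run_test in common/autotest_common.sh also manages xtrace and exit-code accounting, which is omitted here as an assumption about its internals.

    # Simplified run_test-style wrapper: banners around a timed test command.
    run_named_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # e.g. run_named_test event_perf "$rootdir/test/event/event_perf/event_perf" -m 0xF -t 1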
00:05:40.933 ************************************ 00:05:40.933 END TEST event_perf 00:05:40.933 ************************************ 00:05:40.933 00:05:40.933 real 0m1.327s 00:05:40.933 user 0m4.140s 00:05:40.933 sys 0m0.068s 00:05:40.933 06:51:48 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.933 06:51:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.933 06:51:48 event -- common/autotest_common.sh@1142 -- # return 0 00:05:40.933 06:51:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:40.933 06:51:48 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:40.933 06:51:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.933 06:51:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.933 ************************************ 00:05:40.933 START TEST event_reactor 00:05:40.933 ************************************ 00:05:40.933 06:51:48 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:40.933 [2024-07-13 06:51:48.661828] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:40.933 [2024-07-13 06:51:48.661923] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74435 ] 00:05:40.933 [2024-07-13 06:51:48.793165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.933 [2024-07-13 06:51:48.855976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.866 test_start 00:05:41.866 oneshot 00:05:41.866 tick 100 00:05:41.866 tick 100 00:05:41.866 tick 250 00:05:41.866 tick 100 00:05:41.866 tick 100 00:05:41.866 tick 250 00:05:41.866 tick 100 00:05:41.866 tick 500 00:05:41.866 tick 100 00:05:41.866 tick 100 00:05:41.866 tick 250 00:05:41.866 tick 100 00:05:41.866 tick 100 00:05:41.866 test_end 00:05:41.866 00:05:41.866 real 0m1.290s 00:05:41.866 user 0m1.122s 00:05:41.866 sys 0m0.062s 00:05:41.866 06:51:49 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.866 06:51:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:41.866 ************************************ 00:05:41.866 END TEST event_reactor 00:05:41.866 ************************************ 00:05:42.123 06:51:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:42.123 06:51:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.123 06:51:49 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:42.123 06:51:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.123 06:51:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.123 ************************************ 00:05:42.123 START TEST event_reactor_perf 00:05:42.123 ************************************ 00:05:42.123 06:51:49 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.123 [2024-07-13 06:51:50.012966] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:42.123 [2024-07-13 06:51:50.013069] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74465 ] 00:05:42.123 [2024-07-13 06:51:50.147522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.380 [2024-07-13 06:51:50.255649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.312 test_start 00:05:43.312 test_end 00:05:43.312 Performance: 460491 events per second 00:05:43.312 00:05:43.312 real 0m1.355s 00:05:43.312 user 0m1.186s 00:05:43.312 sys 0m0.063s 00:05:43.312 06:51:51 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.312 06:51:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.312 ************************************ 00:05:43.312 END TEST event_reactor_perf 00:05:43.312 ************************************ 00:05:43.570 06:51:51 event -- common/autotest_common.sh@1142 -- # return 0 00:05:43.570 06:51:51 event -- event/event.sh@49 -- # uname -s 00:05:43.570 06:51:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:43.570 06:51:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:43.570 06:51:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.570 06:51:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.570 06:51:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.570 ************************************ 00:05:43.570 START TEST event_scheduler 00:05:43.570 ************************************ 00:05:43.571 06:51:51 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:43.571 * Looking for test storage... 00:05:43.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:43.571 06:51:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:43.571 06:51:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=74527 00:05:43.571 06:51:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.571 06:51:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:43.571 06:51:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 74527 00:05:43.571 06:51:51 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 74527 ']' 00:05:43.571 06:51:51 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.571 06:51:51 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.571 06:51:51 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.571 06:51:51 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.571 06:51:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.571 [2024-07-13 06:51:51.537785] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
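The event_scheduler test that begins here starts the scheduler test app paused at RPC (-m 0xF -p 0x2 --wait-for-rpc -f), waits for its socket, and then, as the following trace shows, drives it over RPC with framework_set_scheduler dynamic followed by framework_start_init. A hedged sketch of that sequence; rpc.py is used in place of the rpc_cmd helper, and relying on the default /var/tmp/spdk.sock socket is an assumption.

    # Start the scheduler test app paused at RPC, then pick the scheduler and finish init.
    "$rootdir/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    # ... wait for the app to listen on its RPC socket (see the waitforlisten trace) ...
    "$rootdir/scripts/rpc.py" framework_set_scheduler dynamic
    "$rootdir/scripts/rpc.py" framework_start_init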
00:05:43.571 [2024-07-13 06:51:51.537858] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74527 ] 00:05:43.828 [2024-07-13 06:51:51.672724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.828 [2024-07-13 06:51:51.769474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.828 [2024-07-13 06:51:51.769653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.828 [2024-07-13 06:51:51.769768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.828 [2024-07-13 06:51:51.769772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:44.763 06:51:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.763 POWER: Cannot set governor of lcore 0 to userspace 00:05:44.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.763 POWER: Cannot set governor of lcore 0 to performance 00:05:44.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.763 POWER: Cannot set governor of lcore 0 to userspace 00:05:44.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.763 POWER: Cannot set governor of lcore 0 to userspace 00:05:44.763 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:44.763 POWER: Unable to set Power Management Environment for lcore 0 00:05:44.763 [2024-07-13 06:51:52.575096] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:44.763 [2024-07-13 06:51:52.575111] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:44.763 [2024-07-13 06:51:52.575119] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:44.763 [2024-07-13 06:51:52.575131] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:44.763 [2024-07-13 06:51:52.575139] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:44.763 [2024-07-13 06:51:52.575146] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 [2024-07-13 06:51:52.668221] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
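The scheduler_create_thread subtest traced below exercises the scheduler plugin RPCs: it creates pinned active and idle threads on each core, plus unpinned ones, and then changes one thread's activity with scheduler_thread_set_active. The calls below are copied from the trace and reuse the rpc_cmd helper exactly as the test does; only the recovered thread id (11) is specific to this run.

    # Plugin-provided RPCs used by scheduler_create_thread, as seen in the trace:
    # -n thread name, -m core mask, -a active percentage.
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    thread_id=11   # id returned in this run, per the trace
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50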
00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 ************************************ 00:05:44.763 START TEST scheduler_create_thread 00:05:44.763 ************************************ 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 2 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 3 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 4 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 5 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 6 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 7 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 8 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 9 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 10 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.763 06:51:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.661 06:51:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.661 06:51:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:46.661 06:51:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:46.661 06:51:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.661 06:51:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.596 06:51:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.596 00:05:47.596 real 0m2.617s 00:05:47.596 user 0m0.013s 00:05:47.596 sys 0m0.008s 00:05:47.596 06:51:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.596 06:51:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.596 ************************************ 00:05:47.596 END TEST scheduler_create_thread 00:05:47.596 ************************************ 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:47.596 06:51:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:47.596 06:51:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 74527 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 74527 ']' 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 74527 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74527 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:47.596 killing process with pid 74527 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74527' 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 74527 00:05:47.596 06:51:55 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 74527 00:05:47.854 [2024-07-13 06:51:55.780393] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
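The scheduler_create_thread trace above drives a set of plugin RPCs: busy and idle threads pinned to each core, an unpinned thread that is active about a third of the time, a half-active thread whose load is raised after creation, and a thread that is created and immediately deleted while the scheduler is running. A condensed sketch of those calls, under the assumption that the test's scheduler_plugin module is importable by rpc.py (the thread ids 11 and 12 are simply what this particular run returned):

  rpc() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }
  rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0 (repeated for 0x2/0x4/0x8 in the trace)
  rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread pinned to core 0 (likewise repeated per core)
  rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, ~30% active
  tid=$(rpc scheduler_thread_create -n half_active -a 0)       # created idle...
  rpc scheduler_thread_set_active "$tid" 50                    # ...then raised to 50% load
  tid=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid"                           # removed while the scheduler is balancing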
00:05:48.112 00:05:48.112 real 0m4.578s 00:05:48.112 user 0m8.869s 00:05:48.112 sys 0m0.369s 00:05:48.112 06:51:55 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.112 ************************************ 00:05:48.112 END TEST event_scheduler 00:05:48.112 06:51:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.112 ************************************ 00:05:48.112 06:51:56 event -- common/autotest_common.sh@1142 -- # return 0 00:05:48.112 06:51:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.112 06:51:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.112 06:51:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.112 06:51:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.112 06:51:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.112 ************************************ 00:05:48.112 START TEST app_repeat 00:05:48.112 ************************************ 00:05:48.112 06:51:56 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=74644 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.112 Process app_repeat pid: 74644 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74644' 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.112 spdk_app_start Round 0 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.112 06:51:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74644 /var/tmp/spdk-nbd.sock 00:05:48.112 06:51:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 74644 ']' 00:05:48.112 06:51:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.112 06:51:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.112 06:51:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.112 06:51:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.112 06:51:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.112 [2024-07-13 06:51:56.061304] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:48.112 [2024-07-13 06:51:56.061381] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74644 ] 00:05:48.388 [2024-07-13 06:51:56.193503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.388 [2024-07-13 06:51:56.295911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.388 [2024-07-13 06:51:56.295934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.388 06:51:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.388 06:51:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:48.388 06:51:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.653 Malloc0 00:05:48.653 06:51:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.219 Malloc1 00:05:49.219 06:51:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.219 /dev/nbd0 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.219 06:51:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.219 06:51:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:49.219 06:51:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:49.219 06:51:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:49.219 06:51:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.219 06:51:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:49.219 06:51:57 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:49.220 06:51:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.220 06:51:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.220 06:51:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.220 1+0 records in 00:05:49.220 1+0 records out 00:05:49.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271957 s, 15.1 MB/s 00:05:49.220 06:51:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.220 06:51:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:49.220 06:51:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.220 06:51:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:49.220 06:51:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:49.220 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.220 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.220 06:51:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.479 /dev/nbd1 00:05:49.479 06:51:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.479 06:51:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.479 1+0 records in 00:05:49.479 1+0 records out 00:05:49.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496553 s, 8.2 MB/s 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:49.479 06:51:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:49.479 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.479 06:51:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.479 06:51:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.479 06:51:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.479 
06:51:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.047 { 00:05:50.047 "bdev_name": "Malloc0", 00:05:50.047 "nbd_device": "/dev/nbd0" 00:05:50.047 }, 00:05:50.047 { 00:05:50.047 "bdev_name": "Malloc1", 00:05:50.047 "nbd_device": "/dev/nbd1" 00:05:50.047 } 00:05:50.047 ]' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.047 { 00:05:50.047 "bdev_name": "Malloc0", 00:05:50.047 "nbd_device": "/dev/nbd0" 00:05:50.047 }, 00:05:50.047 { 00:05:50.047 "bdev_name": "Malloc1", 00:05:50.047 "nbd_device": "/dev/nbd1" 00:05:50.047 } 00:05:50.047 ]' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.047 /dev/nbd1' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.047 /dev/nbd1' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.047 256+0 records in 00:05:50.047 256+0 records out 00:05:50.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00730836 s, 143 MB/s 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.047 256+0 records in 00:05:50.047 256+0 records out 00:05:50.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0406308 s, 25.8 MB/s 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.047 256+0 records in 00:05:50.047 256+0 records out 00:05:50.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027492 s, 38.1 MB/s 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.047 06:51:57 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.047 06:51:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.306 06:51:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.565 06:51:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.565 06:51:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.565 06:51:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.565 06:51:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.565 06:51:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.566 06:51:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.566 06:51:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.566 06:51:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.566 06:51:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.566 06:51:58 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.566 06:51:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.824 06:51:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.824 06:51:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.083 06:51:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.341 [2024-07-13 06:51:59.372208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.599 [2024-07-13 06:51:59.448819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.599 [2024-07-13 06:51:59.448841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.599 [2024-07-13 06:51:59.526254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.599 [2024-07-13 06:51:59.526336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.131 06:52:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.131 spdk_app_start Round 1 00:05:54.131 06:52:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:54.131 06:52:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74644 /var/tmp/spdk-nbd.sock 00:05:54.131 06:52:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 74644 ']' 00:05:54.131 06:52:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.131 06:52:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.131 06:52:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
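Each app_repeat round above follows the same NBD round-trip: create two 64 MiB malloc bdevs, expose them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each device, read it back with cmp, then tear the devices down and confirm nbd_get_disks returns an empty list before the app is killed with SIGTERM. A compact sketch of one round, assuming the app_repeat server is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded (the temp-file path is shortened for readability):

  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096                      # -> Malloc0
  $rpc bdev_malloc_create 64 4096                      # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$nbd"             # read back and verify
  done
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc nbd_get_disks                                   # should now report no disks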
00:05:54.131 06:52:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.131 06:52:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.389 06:52:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.389 06:52:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:54.389 06:52:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.646 Malloc0 00:05:54.646 06:52:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.920 Malloc1 00:05:54.921 06:52:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.921 06:52:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.181 /dev/nbd0 00:05:55.181 06:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.181 06:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.181 1+0 records in 00:05:55.181 1+0 records out 
00:05:55.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185299 s, 22.1 MB/s 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.181 06:52:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.181 06:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.181 06:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.181 06:52:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.439 /dev/nbd1 00:05:55.696 06:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.696 06:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.696 06:52:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.696 06:52:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.696 06:52:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.696 06:52:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.696 06:52:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.696 06:52:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.696 06:52:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.696 06:52:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.697 06:52:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.697 1+0 records in 00:05:55.697 1+0 records out 00:05:55.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322598 s, 12.7 MB/s 00:05:55.697 06:52:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.697 06:52:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.697 06:52:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.697 06:52:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.697 06:52:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.697 06:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.697 06:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.697 06:52:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.697 06:52:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.697 06:52:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.955 { 00:05:55.955 "bdev_name": "Malloc0", 00:05:55.955 "nbd_device": "/dev/nbd0" 00:05:55.955 }, 00:05:55.955 { 00:05:55.955 "bdev_name": "Malloc1", 00:05:55.955 "nbd_device": "/dev/nbd1" 00:05:55.955 } 
00:05:55.955 ]' 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.955 { 00:05:55.955 "bdev_name": "Malloc0", 00:05:55.955 "nbd_device": "/dev/nbd0" 00:05:55.955 }, 00:05:55.955 { 00:05:55.955 "bdev_name": "Malloc1", 00:05:55.955 "nbd_device": "/dev/nbd1" 00:05:55.955 } 00:05:55.955 ]' 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.955 /dev/nbd1' 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.955 /dev/nbd1' 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.955 256+0 records in 00:05:55.955 256+0 records out 00:05:55.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00900336 s, 116 MB/s 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.955 256+0 records in 00:05:55.955 256+0 records out 00:05:55.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257105 s, 40.8 MB/s 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.955 256+0 records in 00:05:55.955 256+0 records out 00:05:55.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313826 s, 33.4 MB/s 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.955 06:52:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.956 06:52:03 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.956 06:52:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.215 06:52:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.473 06:52:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.041 06:52:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.042 06:52:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.042 06:52:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.301 06:52:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.559 [2024-07-13 06:52:05.460732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.559 [2024-07-13 06:52:05.519176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.559 [2024-07-13 06:52:05.519197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.559 [2024-07-13 06:52:05.591305] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.559 [2024-07-13 06:52:05.591381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.894 06:52:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.894 spdk_app_start Round 2 00:06:00.894 06:52:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:00.894 06:52:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74644 /var/tmp/spdk-nbd.sock 00:06:00.894 06:52:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 74644 ']' 00:06:00.894 06:52:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.894 06:52:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.894 06:52:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
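The NBD setup and teardown traces in each round poll /proc/partitions through a pair of helpers: waitfornbd before the device is used, and waitfornbd_exit after nbd_stop_disk. A rough reconstruction is below, hedged because only the xtrace of the real helpers in common/autotest_common.sh is visible here; the retry sleep and the /tmp scratch path are assumptions, while the 20-retry loop and the 4 KiB direct-I/O probe mirror what the log shows:

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # read one 4 KiB block to confirm the device actually serves I/O
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1
      done
  }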
00:06:00.894 06:52:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.894 06:52:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.894 06:52:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.894 06:52:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:00.894 06:52:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.894 Malloc0 00:06:00.894 06:52:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.894 Malloc1 00:06:01.152 06:52:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.152 06:52:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.152 /dev/nbd0 00:06:01.152 06:52:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.152 06:52:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.152 06:52:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:01.152 06:52:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.153 1+0 records in 00:06:01.153 1+0 records out 
00:06:01.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576464 s, 7.1 MB/s 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.153 06:52:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.153 06:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.153 06:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.153 06:52:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.411 /dev/nbd1 00:06:01.411 06:52:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.411 06:52:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.411 1+0 records in 00:06:01.411 1+0 records out 00:06:01.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277398 s, 14.8 MB/s 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.411 06:52:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.411 06:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.411 06:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.411 06:52:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.412 06:52:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.412 06:52:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.670 06:52:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.670 { 00:06:01.670 "bdev_name": "Malloc0", 00:06:01.670 "nbd_device": "/dev/nbd0" 00:06:01.670 }, 00:06:01.670 { 00:06:01.670 "bdev_name": "Malloc1", 00:06:01.670 "nbd_device": "/dev/nbd1" 00:06:01.670 } 
00:06:01.670 ]' 00:06:01.670 06:52:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.670 { 00:06:01.670 "bdev_name": "Malloc0", 00:06:01.670 "nbd_device": "/dev/nbd0" 00:06:01.670 }, 00:06:01.670 { 00:06:01.670 "bdev_name": "Malloc1", 00:06:01.670 "nbd_device": "/dev/nbd1" 00:06:01.670 } 00:06:01.670 ]' 00:06:01.670 06:52:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.928 06:52:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.928 /dev/nbd1' 00:06:01.928 06:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.928 /dev/nbd1' 00:06:01.928 06:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.928 06:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.929 256+0 records in 00:06:01.929 256+0 records out 00:06:01.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00657893 s, 159 MB/s 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.929 256+0 records in 00:06:01.929 256+0 records out 00:06:01.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021989 s, 47.7 MB/s 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.929 256+0 records in 00:06:01.929 256+0 records out 00:06:01.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249372 s, 42.0 MB/s 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.929 06:52:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.187 06:52:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.187 06:52:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.187 06:52:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.187 06:52:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.187 06:52:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.187 06:52:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.187 06:52:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.188 06:52:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.188 06:52:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.188 06:52:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.447 06:52:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.706 06:52:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.706 06:52:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.706 06:52:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:02.706 06:52:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.706 06:52:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.706 06:52:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.706 06:52:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.964 06:52:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.964 06:52:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.964 06:52:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.964 06:52:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.964 06:52:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.964 06:52:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.222 06:52:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.480 [2024-07-13 06:52:11.350034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.480 [2024-07-13 06:52:11.428329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.480 [2024-07-13 06:52:11.428351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.480 [2024-07-13 06:52:11.500110] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.480 [2024-07-13 06:52:11.500178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.763 06:52:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 74644 /var/tmp/spdk-nbd.sock 00:06:06.763 06:52:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 74644 ']' 00:06:06.763 06:52:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.763 06:52:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.763 06:52:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
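For readers skimming the trace, the nbd_common.sh sequence above reduces to a small write-verify-teardown pattern: fill a scratch file with random data, dd it onto each exported /dev/nbdX with O_DIRECT, cmp each device back against the file, remove the scratch file, then ask the target to stop each NBD disk and poll /proc/partitions until the kernel node disappears. A stand-alone sketch of that pattern follows; the device list, block size, count, and RPC names are taken from the trace, while the scratch-file location, the plain rpc.py calls, and the 0.1 s poll interval are simplifications of the script's helpers.

    # Sketch only: mirrors the commands traced above, not the nbd_common.sh helpers themselves.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$(mktemp)                       # the script uses test/event/nbdrandtest instead

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write it to each NBD device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                             # byte-for-byte verification
    done
    rm "$tmp_file"

    for dev in "${nbd_list[@]}"; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"                      # detach via the target's RPC socket
        name=$(basename "$dev")
        for i in $(seq 1 20); do                                    # poll until the kernel node is gone
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done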
00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:06.764 06:52:14 event.app_repeat -- event/event.sh@39 -- # killprocess 74644 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 74644 ']' 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 74644 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74644 00:06:06.764 killing process with pid 74644 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74644' 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@967 -- # kill 74644 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@972 -- # wait 74644 00:06:06.764 spdk_app_start is called in Round 0. 00:06:06.764 Shutdown signal received, stop current app iteration 00:06:06.764 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:06.764 spdk_app_start is called in Round 1. 00:06:06.764 Shutdown signal received, stop current app iteration 00:06:06.764 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:06.764 spdk_app_start is called in Round 2. 00:06:06.764 Shutdown signal received, stop current app iteration 00:06:06.764 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:06.764 spdk_app_start is called in Round 3. 
00:06:06.764 Shutdown signal received, stop current app iteration 00:06:06.764 06:52:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.764 06:52:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:06.764 00:06:06.764 real 0m18.590s 00:06:06.764 user 0m41.506s 00:06:06.764 sys 0m3.120s 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.764 06:52:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 ************************************ 00:06:06.764 END TEST app_repeat 00:06:06.764 ************************************ 00:06:06.764 06:52:14 event -- common/autotest_common.sh@1142 -- # return 0 00:06:06.764 06:52:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.764 06:52:14 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.764 06:52:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.764 06:52:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.764 06:52:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 ************************************ 00:06:06.764 START TEST cpu_locks 00:06:06.764 ************************************ 00:06:06.764 06:52:14 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.764 * Looking for test storage... 00:06:06.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:06.764 06:52:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.764 06:52:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.764 06:52:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.764 06:52:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.764 06:52:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.764 06:52:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.764 06:52:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 ************************************ 00:06:06.764 START TEST default_locks 00:06:06.764 ************************************ 00:06:06.764 06:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:06.764 06:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75257 00:06:06.764 06:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.764 06:52:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75257 00:06:06.764 06:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 75257 ']' 00:06:06.764 06:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.764 06:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.764 06:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
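The per-round teardown in the app_repeat run above is driven by just two event.sh lines: a spdk_kill_instance RPC on the NBD socket followed by a fixed sleep, which is what produces each 'Shutdown signal received ... spdk_app_start is called in Round N' pair printed by the app itself. A minimal stand-alone version of that step, with the paths taken from the trace and the surrounding round loop left out:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM   # ask the running app_repeat instance to shut down
    sleep 3                                        # give it time to restart before the next round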
00:06:06.764 06:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.764 06:52:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.022 [2024-07-13 06:52:14.837717] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:07.022 [2024-07-13 06:52:14.837843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75257 ] 00:06:07.022 [2024-07-13 06:52:14.969852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.022 [2024-07-13 06:52:15.038255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.955 06:52:15 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.955 06:52:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:07.955 06:52:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75257 00:06:07.955 06:52:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75257 00:06:07.955 06:52:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75257 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 75257 ']' 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 75257 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75257 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.213 killing process with pid 75257 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75257' 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 75257 00:06:08.213 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 75257 00:06:08.780 06:52:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75257 00:06:08.780 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:08.780 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75257 00:06:08.780 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:08.780 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75257 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 75257 ']' 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.781 ERROR: process (pid: 75257) is no longer running 00:06:08.781 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75257) - No such process 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:08.781 ************************************ 00:06:08.781 END TEST default_locks 00:06:08.781 ************************************ 00:06:08.781 00:06:08.781 real 0m1.965s 00:06:08.781 user 0m1.981s 00:06:08.781 sys 0m0.639s 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.781 06:52:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.781 06:52:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:08.781 06:52:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:08.781 06:52:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.781 06:52:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.781 06:52:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.781 ************************************ 00:06:08.781 START TEST default_locks_via_rpc 00:06:08.781 ************************************ 00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75321 00:06:08.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
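The default_locks test that ends above hinges on one small check: a target started with -m 0x1 must hold a file lock that lslocks can attribute to its PID, and the lock must vanish once the process is killed. A compressed sketch of that check; the lslocks/grep pipeline is lifted from the trace, while the function wrapper and the assertion around it are illustrative.

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # true while the target holds its core-mask lock
    }

    spdk_tgt_pid=75257                              # PID from the run above; any live target PID works
    locks_exist "$spdk_tgt_pid" || echo "expected spdk_cpu_lock to be held by $spdk_tgt_pid" >&2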
00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75321 00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75321 ']' 00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.781 06:52:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.781 [2024-07-13 06:52:16.851411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:08.781 [2024-07-13 06:52:16.851533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75321 ] 00:06:09.039 [2024-07-13 06:52:16.990241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.039 [2024-07-13 06:52:17.072550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75321 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75321 00:06:09.973 06:52:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
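default_locks_via_rpc, traced above, toggles the same core-mask locks at runtime instead of at start-up: framework_disable_cpumask_locks should leave no /var/tmp/spdk_cpu_lock_* files behind, and framework_enable_cpumask_locks should re-acquire them for the target's mask. A rough equivalent using the rpc.py client directly; the script goes through its rpc_cmd/no_locks helpers instead, and the ls check here is a simplification.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    spdk_tgt_pid=75321                                   # PID from the run above; illustrative

    "$rpc" framework_disable_cpumask_locks               # release the core 0 lock
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo "locks unexpectedly still present" >&2

    "$rpc" framework_enable_cpumask_locks                # take the lock again
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # and confirm it is visible to lslocks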
00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75321 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 75321 ']' 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 75321 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75321 00:06:10.540 killing process with pid 75321 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75321' 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 75321 00:06:10.540 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 75321 00:06:10.823 ************************************ 00:06:10.824 END TEST default_locks_via_rpc 00:06:10.824 ************************************ 00:06:10.824 00:06:10.824 real 0m2.067s 00:06:10.824 user 0m2.160s 00:06:10.824 sys 0m0.659s 00:06:10.824 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.824 06:52:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.082 06:52:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:11.082 06:52:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:11.082 06:52:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.082 06:52:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.082 06:52:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.082 ************************************ 00:06:11.082 START TEST non_locking_app_on_locked_coremask 00:06:11.082 ************************************ 00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75392 00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75392 /var/tmp/spdk.sock 00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75392 ']' 00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.082 06:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.082 [2024-07-13 06:52:18.975709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:11.082 [2024-07-13 06:52:18.975827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75392 ] 00:06:11.082 [2024-07-13 06:52:19.113863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.340 [2024-07-13 06:52:19.202755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75426 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75426 /var/tmp/spdk2.sock 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75426 ']' 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.906 06:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.166 [2024-07-13 06:52:20.005387] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:12.166 [2024-07-13 06:52:20.006356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75426 ] 00:06:12.166 [2024-07-13 06:52:20.143927] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
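non_locking_app_on_locked_coremask, running above, shows why the second target prints 'CPU core locks deactivated.': it is launched with --disable-cpumask-locks and its own RPC socket, so it can come up on core 0 even though the first instance already holds /var/tmp/spdk_cpu_lock_000. A condensed sketch of the two launches, with the binary path, mask, and socket taken from the trace and the backgrounding and readiness checks simplified:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                                                 # first instance claims core 0
    first_pid=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second shares core 0, takes no lock
    second_pid=$!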
00:06:12.166 [2024-07-13 06:52:20.143976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.423 [2024-07-13 06:52:20.287910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.988 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.988 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:12.988 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75392 00:06:12.988 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75392 00:06:12.988 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75392 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75392 ']' 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75392 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75392 00:06:13.922 killing process with pid 75392 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75392' 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75392 00:06:13.922 06:52:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75392 00:06:14.856 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75426 00:06:14.856 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75426 ']' 00:06:14.856 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75426 00:06:14.857 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:14.857 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.857 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75426 00:06:14.857 killing process with pid 75426 00:06:14.857 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.857 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.857 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75426' 00:06:14.857 06:52:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75426 00:06:14.857 06:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75426 00:06:15.421 00:06:15.421 real 0m4.381s 00:06:15.421 user 0m4.661s 00:06:15.421 sys 0m1.237s 00:06:15.421 06:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.421 ************************************ 00:06:15.421 END TEST non_locking_app_on_locked_coremask 00:06:15.421 ************************************ 00:06:15.421 06:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.421 06:52:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:15.421 06:52:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:15.421 06:52:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.421 06:52:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.421 06:52:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.421 ************************************ 00:06:15.421 START TEST locking_app_on_unlocked_coremask 00:06:15.421 ************************************ 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75505 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75505 /var/tmp/spdk.sock 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75505 ']' 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.421 06:52:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.421 [2024-07-13 06:52:23.424442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:15.421 [2024-07-13 06:52:23.424749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75505 ] 00:06:15.679 [2024-07-13 06:52:23.563076] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.679 [2024-07-13 06:52:23.563128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.679 [2024-07-13 06:52:23.666795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75535 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75535 /var/tmp/spdk2.sock 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75535 ']' 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.613 06:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.613 [2024-07-13 06:52:24.422318] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:16.613 [2024-07-13 06:52:24.422805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75535 ] 00:06:16.613 [2024-07-13 06:52:24.568416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.871 [2024-07-13 06:52:24.786984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.437 06:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.437 06:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.437 06:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75535 00:06:17.437 06:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.437 06:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75535 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75505 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75505 ']' 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 75505 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75505 00:06:18.369 killing process with pid 75505 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75505' 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 75505 00:06:18.369 06:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 75505 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75535 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75535 ']' 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 75535 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75535 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.302 killing process with pid 75535 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75535' 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 75535 00:06:19.302 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 75535 00:06:19.866 00:06:19.866 real 0m4.290s 00:06:19.866 user 0m4.590s 00:06:19.866 sys 0m1.276s 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.866 ************************************ 00:06:19.866 END TEST locking_app_on_unlocked_coremask 00:06:19.866 ************************************ 00:06:19.866 06:52:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:19.866 06:52:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.866 06:52:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.866 06:52:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.866 06:52:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.866 ************************************ 00:06:19.866 START TEST locking_app_on_locked_coremask 00:06:19.866 ************************************ 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75614 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75614 /var/tmp/spdk.sock 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75614 ']' 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.866 06:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.866 [2024-07-13 06:52:27.746302] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:19.867 [2024-07-13 06:52:27.746395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75614 ] 00:06:19.867 [2024-07-13 06:52:27.877525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.123 [2024-07-13 06:52:27.949535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75642 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75642 /var/tmp/spdk2.sock 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75642 /var/tmp/spdk2.sock 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75642 /var/tmp/spdk2.sock 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75642 ']' 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.689 06:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.947 [2024-07-13 06:52:28.796965] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:20.947 [2024-07-13 06:52:28.797077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75642 ] 00:06:20.947 [2024-07-13 06:52:28.939027] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75614 has claimed it. 00:06:20.947 [2024-07-13 06:52:28.939114] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.515 ERROR: process (pid: 75642) is no longer running 00:06:21.515 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75642) - No such process 00:06:21.515 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.515 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:21.515 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:21.515 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.515 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:21.515 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.515 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75614 00:06:21.515 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75614 00:06:21.515 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75614 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75614 ']' 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75614 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75614 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.779 killing process with pid 75614 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75614' 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75614 00:06:21.779 06:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75614 00:06:22.357 00:06:22.357 real 0m2.626s 00:06:22.357 user 0m2.929s 00:06:22.357 sys 0m0.669s 00:06:22.357 06:52:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.357 06:52:30 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:22.357 ************************************ 00:06:22.357 END TEST locking_app_on_locked_coremask 00:06:22.357 ************************************ 00:06:22.357 06:52:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:22.357 06:52:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:22.357 06:52:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.357 06:52:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.357 06:52:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.357 ************************************ 00:06:22.357 START TEST locking_overlapped_coremask 00:06:22.357 ************************************ 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75699 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75699 /var/tmp/spdk.sock 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 75699 ']' 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.357 06:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.615 [2024-07-13 06:52:30.435752] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
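locking_app_on_locked_coremask, which finishes just above, is the negative counterpart: the second -m 0x1 target keeps its locks enabled, so it logs 'Cannot create lock on core 0, probably process 75614 has claimed it.' and exits, and the script asserts that failure with its NOT wrapper. A reduced sketch of the same assertion, with NOT replaced by a plain shell test and the waitforlisten timeout logic omitted:

    "$spdk_tgt" -m 0x1 &                                  # first instance claims core 0
    first_pid=$!
    if "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then    # second instance is expected to abort
        echo "unexpected: a second target acquired core 0" >&2
        exit 1
    fi
    kill "$first_pid"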
00:06:22.615 [2024-07-13 06:52:30.435847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75699 ] 00:06:22.615 [2024-07-13 06:52:30.573007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.615 [2024-07-13 06:52:30.656236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.615 [2024-07-13 06:52:30.656388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.615 [2024-07-13 06:52:30.656397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75729 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75729 /var/tmp/spdk2.sock 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75729 /var/tmp/spdk2.sock 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:23.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75729 /var/tmp/spdk2.sock 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 75729 ']' 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.552 06:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.552 [2024-07-13 06:52:31.505402] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:23.552 [2024-07-13 06:52:31.505545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75729 ] 00:06:23.810 [2024-07-13 06:52:31.671798] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75699 has claimed it. 00:06:23.810 [2024-07-13 06:52:31.671856] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.377 ERROR: process (pid: 75729) is no longer running 00:06:24.377 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75729) - No such process 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.377 06:52:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75699 00:06:24.378 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 75699 ']' 00:06:24.378 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 75699 00:06:24.378 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:24.378 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.378 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75699 00:06:24.378 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.378 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.378 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75699' 00:06:24.378 killing process with pid 75699 00:06:24.378 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 75699 00:06:24.378 06:52:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 75699 00:06:24.942 00:06:24.942 real 0m2.425s 00:06:24.942 user 0m6.810s 00:06:24.942 sys 0m0.542s 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.942 ************************************ 00:06:24.942 END TEST locking_overlapped_coremask 00:06:24.942 ************************************ 00:06:24.942 06:52:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.942 06:52:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:24.942 06:52:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.942 06:52:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.942 06:52:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.942 ************************************ 00:06:24.942 START TEST locking_overlapped_coremask_via_rpc 00:06:24.942 ************************************ 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75775 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 75775 /var/tmp/spdk.sock 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75775 ']' 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.942 06:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.942 [2024-07-13 06:52:32.910587] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:24.942 [2024-07-13 06:52:32.910694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75775 ] 00:06:25.199 [2024-07-13 06:52:33.048423] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
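The "Cannot create lock on core 2" error above, and the locking_overlapped_coremask_via_rpc test that starts next, both rely on the deliberate overlap between the two core masks passed to spdk_tgt (-m 0x7 for the primary target, -m 0x1c for the secondary). A quick arithmetic check of that overlap, using nothing beyond the masks already shown in the log:
  # 0x7  -> 0b00111 -> cores 0,1,2  (primary target)
  # 0x1c -> 0b11100 -> cores 2,3,4  (secondary target)
  echo $(( 0x7 & 0x1c ))   # prints 4, i.e. bit 2 set: both masks claim core 2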
00:06:25.199 [2024-07-13 06:52:33.048472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.199 [2024-07-13 06:52:33.140498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.199 [2024-07-13 06:52:33.140674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.199 [2024-07-13 06:52:33.140687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=75805 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 75805 /var/tmp/spdk2.sock 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75805 ']' 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.765 06:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 [2024-07-13 06:52:33.893891] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:26.023 [2024-07-13 06:52:33.894229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75805 ] 00:06:26.023 [2024-07-13 06:52:34.040015] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.023 [2024-07-13 06:52:34.040064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.280 [2024-07-13 06:52:34.209353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.281 [2024-07-13 06:52:34.212671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.281 [2024-07-13 06:52:34.212680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.848 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.848 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:26.848 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.848 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.848 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.848 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.849 [2024-07-13 06:52:34.881803] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75775 has claimed it. 00:06:26.849 2024/07/13 06:52:34 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:26.849 request: 00:06:26.849 { 00:06:26.849 "method": "framework_enable_cpumask_locks", 00:06:26.849 "params": {} 00:06:26.849 } 00:06:26.849 Got JSON-RPC error response 00:06:26.849 GoRPCClient: error on JSON-RPC call 00:06:26.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
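The JSON-RPC failure above is the outcome this test expects: both targets were started with --disable-cpumask-locks, the primary then claimed its cores via framework_enable_cpumask_locks, so the same RPC issued against the secondary's socket cannot claim the shared core 2. A rough standalone sketch of that sequence, assuming the repo layout and socket paths shown in this log (the rpc.py script path is an assumption, the test harness uses its rpc_cmd wrapper instead):
  # Sketch only - binary and script paths assumed from the xtrace above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # The primary claims cores 0-2 first ...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # ... so the secondary's claim fails with "Failed to claim CPU core: 2", as logged above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks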
00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 75775 /var/tmp/spdk.sock 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75775 ']' 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.849 06:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.107 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.107 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:27.107 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 75805 /var/tmp/spdk2.sock 00:06:27.107 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75805 ']' 00:06:27.107 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.107 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.107 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:27.107 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.107 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.366 ************************************ 00:06:27.366 END TEST locking_overlapped_coremask_via_rpc 00:06:27.366 ************************************ 00:06:27.366 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.366 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:27.366 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:27.366 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.366 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.366 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.366 00:06:27.366 real 0m2.571s 00:06:27.366 user 0m1.256s 00:06:27.366 sys 0m0.215s 00:06:27.366 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.366 06:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:27.625 06:52:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:27.625 06:52:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75775 ]] 00:06:27.625 06:52:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75775 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75775 ']' 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75775 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75775 00:06:27.625 killing process with pid 75775 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75775' 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 75775 00:06:27.625 06:52:35 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 75775 00:06:28.193 06:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75805 ]] 00:06:28.193 06:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75805 00:06:28.193 06:52:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75805 ']' 00:06:28.194 06:52:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75805 00:06:28.194 06:52:36 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:28.194 06:52:36 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.194 06:52:36 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75805 00:06:28.194 killing process with pid 75805 00:06:28.194 06:52:36 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:28.194 06:52:36 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:28.194 06:52:36 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75805' 00:06:28.194 06:52:36 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 75805 00:06:28.194 06:52:36 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 75805 00:06:28.761 06:52:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.761 06:52:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:28.761 06:52:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75775 ]] 00:06:28.761 06:52:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75775 00:06:28.761 06:52:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75775 ']' 00:06:28.761 06:52:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75775 00:06:28.761 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (75775) - No such process 00:06:28.761 Process with pid 75775 is not found 00:06:28.761 06:52:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 75775 is not found' 00:06:28.761 06:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75805 ]] 00:06:28.761 06:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75805 00:06:28.761 06:52:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75805 ']' 00:06:28.761 06:52:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75805 00:06:28.761 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (75805) - No such process 00:06:28.761 Process with pid 75805 is not found 00:06:28.761 06:52:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 75805 is not found' 00:06:28.761 06:52:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.761 00:06:28.761 real 0m21.916s 00:06:28.761 user 0m37.628s 00:06:28.761 sys 0m6.258s 00:06:28.761 06:52:36 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.761 ************************************ 00:06:28.761 06:52:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.761 END TEST cpu_locks 00:06:28.761 ************************************ 00:06:28.761 06:52:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:28.761 00:06:28.761 real 0m49.460s 00:06:28.761 user 1m34.583s 00:06:28.761 sys 0m10.187s 00:06:28.761 06:52:36 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.761 06:52:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.761 ************************************ 00:06:28.761 END TEST event 00:06:28.761 ************************************ 00:06:28.761 06:52:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.761 06:52:36 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:28.761 06:52:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.761 06:52:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.761 06:52:36 -- common/autotest_common.sh@10 -- # set +x 00:06:28.761 ************************************ 00:06:28.761 START TEST thread 
00:06:28.761 ************************************ 00:06:28.761 06:52:36 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:28.761 * Looking for test storage... 00:06:28.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:28.761 06:52:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.761 06:52:36 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:28.761 06:52:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.761 06:52:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.761 ************************************ 00:06:28.761 START TEST thread_poller_perf 00:06:28.761 ************************************ 00:06:28.761 06:52:36 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.761 [2024-07-13 06:52:36.793929] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:28.761 [2024-07-13 06:52:36.794022] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75957 ] 00:06:29.020 [2024-07-13 06:52:36.933782] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.021 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:29.021 [2024-07-13 06:52:37.038875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.397 ====================================== 00:06:30.397 busy:2207468300 (cyc) 00:06:30.397 total_run_count: 362000 00:06:30.397 tsc_hz: 2200000000 (cyc) 00:06:30.397 ====================================== 00:06:30.397 poller_cost: 6097 (cyc), 2771 (nsec) 00:06:30.397 00:06:30.397 real 0m1.343s 00:06:30.397 user 0m1.167s 00:06:30.397 sys 0m0.070s 00:06:30.397 06:52:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.397 06:52:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.397 ************************************ 00:06:30.397 END TEST thread_poller_perf 00:06:30.397 ************************************ 00:06:30.397 06:52:38 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:30.397 06:52:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:30.397 06:52:38 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:30.397 06:52:38 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.397 06:52:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.397 ************************************ 00:06:30.397 START TEST thread_poller_perf 00:06:30.397 ************************************ 00:06:30.397 06:52:38 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:30.397 [2024-07-13 06:52:38.199717] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:30.397 [2024-07-13 06:52:38.199854] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75987 ] 00:06:30.397 [2024-07-13 06:52:38.329661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.398 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:30.398 [2024-07-13 06:52:38.398803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.775 ====================================== 00:06:31.775 busy:2202213580 (cyc) 00:06:31.775 total_run_count: 5306000 00:06:31.775 tsc_hz: 2200000000 (cyc) 00:06:31.775 ====================================== 00:06:31.775 poller_cost: 415 (cyc), 188 (nsec) 00:06:31.775 00:06:31.775 real 0m1.290s 00:06:31.775 user 0m1.116s 00:06:31.775 sys 0m0.068s 00:06:31.775 06:52:39 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.775 06:52:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.775 ************************************ 00:06:31.775 END TEST thread_poller_perf 00:06:31.775 ************************************ 00:06:31.775 06:52:39 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:31.775 06:52:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:31.775 00:06:31.775 real 0m2.836s 00:06:31.775 user 0m2.353s 00:06:31.775 sys 0m0.263s 00:06:31.775 06:52:39 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.775 06:52:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.775 ************************************ 00:06:31.775 END TEST thread 00:06:31.775 ************************************ 00:06:31.775 06:52:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.775 06:52:39 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:31.775 06:52:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.775 06:52:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.775 06:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.775 ************************************ 00:06:31.775 START TEST accel 00:06:31.775 ************************************ 00:06:31.775 06:52:39 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:31.775 * Looking for test storage... 00:06:31.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:31.775 06:52:39 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:31.775 06:52:39 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:31.775 06:52:39 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.775 06:52:39 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76066 00:06:31.775 06:52:39 accel -- accel/accel.sh@63 -- # waitforlisten 76066 00:06:31.775 06:52:39 accel -- common/autotest_common.sh@829 -- # '[' -z 76066 ']' 00:06:31.775 06:52:39 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.775 06:52:39 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.775 06:52:39 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
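The poller_cost values printed by the two poller_perf runs above follow directly from the counters next to them: cycles per poll is busy cycles divided by total_run_count, and the nanosecond figure rescales that by the reported 2.2 GHz tsc_hz. A quick shell-arithmetic check with the numbers copied from the log:
  echo $(( 2207468300 / 362000 ))                              # 6097 cyc/poll (1 us period run)
  echo $(( 2207468300 / 362000 * 1000000000 / 2200000000 ))    # ~2771 ns
  echo $(( 2202213580 / 5306000 ))                             # 415 cyc/poll (0 us period run)
  echo $(( 2202213580 / 5306000 * 1000000000 / 2200000000 ))   # ~188 ns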
00:06:31.775 06:52:39 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.775 06:52:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.775 06:52:39 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:31.775 06:52:39 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:31.775 06:52:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.775 06:52:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.775 06:52:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.775 06:52:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.775 06:52:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.775 06:52:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:31.775 06:52:39 accel -- accel/accel.sh@41 -- # jq -r . 00:06:31.775 [2024-07-13 06:52:39.740780] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:31.775 [2024-07-13 06:52:39.740887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76066 ] 00:06:32.035 [2024-07-13 06:52:39.881109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.035 [2024-07-13 06:52:39.971721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.983 06:52:40 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.983 06:52:40 accel -- common/autotest_common.sh@862 -- # return 0 00:06:32.983 06:52:40 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:32.983 06:52:40 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:32.983 06:52:40 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:32.983 06:52:40 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:32.983 06:52:40 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:32.983 06:52:40 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:32.983 06:52:40 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.983 06:52:40 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:32.983 06:52:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 06:52:40 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.983 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.983 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.983 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.983 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.983 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.983 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.983 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.983 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.983 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.983 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.983 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.983 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.983 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.983 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.983 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.983 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.983 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.983 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 
06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # IFS== 00:06:32.984 06:52:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:32.984 06:52:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.984 06:52:40 accel -- accel/accel.sh@75 -- # killprocess 76066 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@948 -- # '[' -z 76066 ']' 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@952 -- # kill -0 76066 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@953 -- # uname 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76066 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.984 killing process with pid 76066 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76066' 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@967 -- # kill 76066 00:06:32.984 06:52:40 accel -- common/autotest_common.sh@972 -- # wait 76066 00:06:33.256 06:52:41 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:33.256 06:52:41 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:33.256 06:52:41 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:33.256 06:52:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.256 06:52:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.515 06:52:41 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:33.515 06:52:41 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:33.515 06:52:41 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:33.515 06:52:41 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.515 06:52:41 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.515 06:52:41 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.515 06:52:41 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.515 06:52:41 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.515 06:52:41 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:33.515 06:52:41 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:33.515 06:52:41 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.515 06:52:41 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:33.515 06:52:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.515 06:52:41 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:33.515 06:52:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.515 06:52:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.515 06:52:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.515 ************************************ 00:06:33.515 START TEST accel_missing_filename 00:06:33.515 ************************************ 00:06:33.515 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:33.515 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:33.515 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:33.515 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:33.515 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.515 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:33.515 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.515 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:33.515 06:52:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:33.515 06:52:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:33.515 06:52:41 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.515 06:52:41 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.515 06:52:41 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.515 06:52:41 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.515 06:52:41 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.515 06:52:41 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:33.515 06:52:41 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:33.515 [2024-07-13 06:52:41.435476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:33.515 [2024-07-13 06:52:41.435622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76137 ] 00:06:33.515 [2024-07-13 06:52:41.565829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.773 [2024-07-13 06:52:41.633330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.773 [2024-07-13 06:52:41.705240] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.773 [2024-07-13 06:52:41.814426] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:34.033 A filename is required. 
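The "A filename is required." error above is the negative outcome accel_missing_filename looks for: the compress workload was launched without an -l input file, so accel_perf stops the app with a non-zero status. Stripped of the test harness, the failing invocation is essentially the following (binary path taken from the xtrace above; omitting the -c config the harness passes is an assumption here):
  # Expected to fail: compress/decompress needs -l <uncompressed input file>.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress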
00:06:34.033 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:34.033 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.033 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:34.033 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.033 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:34.033 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.033 00:06:34.033 real 0m0.507s 00:06:34.033 user 0m0.317s 00:06:34.033 sys 0m0.136s 00:06:34.033 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.033 06:52:41 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:34.033 ************************************ 00:06:34.033 END TEST accel_missing_filename 00:06:34.033 ************************************ 00:06:34.033 06:52:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.033 06:52:41 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.033 06:52:41 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:34.033 06:52:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.033 06:52:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.033 ************************************ 00:06:34.033 START TEST accel_compress_verify 00:06:34.033 ************************************ 00:06:34.033 06:52:41 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.033 06:52:41 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:34.033 06:52:41 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.033 06:52:41 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:34.033 06:52:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.033 06:52:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:34.033 06:52:41 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.033 06:52:41 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.033 06:52:41 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:34.033 06:52:41 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:34.033 06:52:41 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.033 06:52:41 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.033 06:52:41 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.033 06:52:41 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.033 06:52:41 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.033 06:52:41 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:34.033 06:52:41 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:34.033 [2024-07-13 06:52:42.002095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:34.033 [2024-07-13 06:52:42.002190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76161 ] 00:06:34.292 [2024-07-13 06:52:42.141441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.292 [2024-07-13 06:52:42.208381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.292 [2024-07-13 06:52:42.281872] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.551 [2024-07-13 06:52:42.388665] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:34.551 00:06:34.551 Compression does not support the verify option, aborting. 00:06:34.551 06:52:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:34.551 06:52:42 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.551 06:52:42 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:34.551 06:52:42 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.551 06:52:42 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:34.551 06:52:42 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.551 00:06:34.551 real 0m0.531s 00:06:34.551 user 0m0.325s 00:06:34.551 sys 0m0.143s 00:06:34.551 06:52:42 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.551 06:52:42 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:34.551 ************************************ 00:06:34.551 END TEST accel_compress_verify 00:06:34.551 ************************************ 00:06:34.551 06:52:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.551 06:52:42 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:34.551 06:52:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:34.551 06:52:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.551 06:52:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.551 ************************************ 00:06:34.551 START TEST accel_wrong_workload 00:06:34.551 ************************************ 00:06:34.551 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:34.551 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:34.551 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:34.551 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:34.551 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.551 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:34.551 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.551 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:34.551 06:52:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:34.551 06:52:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:34.551 06:52:42 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.551 06:52:42 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.551 06:52:42 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.551 06:52:42 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.551 06:52:42 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.551 06:52:42 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:34.551 06:52:42 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:34.551 Unsupported workload type: foobar 00:06:34.551 [2024-07-13 06:52:42.585427] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:34.551 accel_perf options: 00:06:34.551 [-h help message] 00:06:34.551 [-q queue depth per core] 00:06:34.551 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:34.551 [-T number of threads per core 00:06:34.552 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:34.552 [-t time in seconds] 00:06:34.552 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:34.552 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:34.552 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:34.552 [-l for compress/decompress workloads, name of uncompressed input file 00:06:34.552 [-S for crc32c workload, use this seed value (default 0) 00:06:34.552 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:34.552 [-f for fill workload, use this BYTE value (default 255) 00:06:34.552 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:34.552 [-y verify result if this switch is on] 00:06:34.552 [-a tasks to allocate per core (default: same value as -q)] 00:06:34.552 Can be used to spread operations across a wider range of memory. 
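The usage text above is accel_perf rejecting the bogus -w value the accel_wrong_workload test feeds it; any workload name outside the list in that help output takes the same "Unsupported workload type" path, and the accel_negative_buffers test that follows exercises the sibling check for "-x -1". A standalone sketch of the failing command, same binary as above:
  # Expected to fail: "foobar" is not one of the supported -w workload types.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar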
00:06:34.552 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:34.552 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.552 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.552 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.552 00:06:34.552 real 0m0.028s 00:06:34.552 user 0m0.014s 00:06:34.552 sys 0m0.014s 00:06:34.552 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.552 06:52:42 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:34.552 ************************************ 00:06:34.552 END TEST accel_wrong_workload 00:06:34.552 ************************************ 00:06:34.811 06:52:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.811 06:52:42 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:34.811 06:52:42 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:34.811 06:52:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.811 06:52:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.811 ************************************ 00:06:34.811 START TEST accel_negative_buffers 00:06:34.811 ************************************ 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:34.811 06:52:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:34.811 06:52:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:34.811 06:52:42 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.811 06:52:42 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.811 06:52:42 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.811 06:52:42 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.811 06:52:42 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.811 06:52:42 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:34.811 06:52:42 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:34.811 -x option must be non-negative. 
00:06:34.811 [2024-07-13 06:52:42.661422] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:34.811 accel_perf options: 00:06:34.811 [-h help message] 00:06:34.811 [-q queue depth per core] 00:06:34.811 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:34.811 [-T number of threads per core 00:06:34.811 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:34.811 [-t time in seconds] 00:06:34.811 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:34.811 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:34.811 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:34.811 [-l for compress/decompress workloads, name of uncompressed input file 00:06:34.811 [-S for crc32c workload, use this seed value (default 0) 00:06:34.811 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:34.811 [-f for fill workload, use this BYTE value (default 255) 00:06:34.811 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:34.811 [-y verify result if this switch is on] 00:06:34.811 [-a tasks to allocate per core (default: same value as -q)] 00:06:34.811 Can be used to spread operations across a wider range of memory. 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.811 00:06:34.811 real 0m0.028s 00:06:34.811 user 0m0.016s 00:06:34.811 sys 0m0.013s 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.811 06:52:42 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:34.811 ************************************ 00:06:34.811 END TEST accel_negative_buffers 00:06:34.811 ************************************ 00:06:34.811 06:52:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.811 06:52:42 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:34.811 06:52:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:34.811 06:52:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.811 06:52:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.811 ************************************ 00:06:34.811 START TEST accel_crc32c 00:06:34.811 ************************************ 00:06:34.811 06:52:42 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:34.811 06:52:42 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:34.811 06:52:42 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:34.811 06:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.811 06:52:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:34.812 06:52:42 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:34.812 [2024-07-13 06:52:42.737431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:34.812 [2024-07-13 06:52:42.737499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76220 ] 00:06:34.812 [2024-07-13 06:52:42.863839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.071 [2024-07-13 06:52:42.947617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.071 06:52:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:36.447 06:52:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.447 00:06:36.447 real 0m1.532s 00:06:36.447 user 0m1.300s 00:06:36.447 sys 0m0.142s 00:06:36.447 ************************************ 00:06:36.447 END TEST accel_crc32c 00:06:36.447 ************************************ 00:06:36.447 06:52:44 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.447 06:52:44 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:36.447 06:52:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.447 06:52:44 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:36.447 06:52:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:36.447 06:52:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.447 06:52:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.447 ************************************ 00:06:36.447 START TEST accel_crc32c_C2 00:06:36.447 ************************************ 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:36.447 06:52:44 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.447 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:36.447 [2024-07-13 06:52:44.338452] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:36.447 [2024-07-13 06:52:44.338581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76254 ] 00:06:36.447 [2024-07-13 06:52:44.469831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.707 [2024-07-13 06:52:44.537248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.707 06:52:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.084 06:52:45 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.084 ************************************ 00:06:38.084 END TEST accel_crc32c_C2 00:06:38.084 ************************************ 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.084 00:06:38.084 real 0m1.538s 00:06:38.084 user 0m0.016s 00:06:38.084 sys 0m0.003s 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.084 06:52:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:38.084 06:52:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.084 06:52:45 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:38.084 06:52:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:38.084 06:52:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.084 06:52:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.084 ************************************ 00:06:38.084 START TEST accel_copy 00:06:38.084 ************************************ 00:06:38.084 06:52:45 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.084 06:52:45 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:38.084 06:52:45 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:38.084 [2024-07-13 06:52:45.933300] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:38.084 [2024-07-13 06:52:45.933387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76289 ] 00:06:38.084 [2024-07-13 06:52:46.070243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.343 [2024-07-13 06:52:46.191732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 
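The option summary printed by the failed accel_negative_buffers run above doubles as a reference for the flags used by every test in this section. As a rough sketch only (SPDK_DIR is a placeholder for the /home/vagrant/spdk_repo/spdk path seen in the log, and the harness's generated -c /dev/fd/62 JSON config is assumed to be safely omittable so the default software module is used), the two crc32c cases above boil down to:

  # hypothetical standalone reruns of accel_crc32c and accel_crc32c_C2;
  # flags follow the usage text above, timings will differ from the log
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # 1 s run, CRC seed 32, verify results
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w crc32c -y -C 2     # same workload over a 2-element io vector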
06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.343 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.344 06:52:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.722 ************************************ 00:06:39.722 END TEST accel_copy 00:06:39.722 ************************************ 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:39.722 06:52:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.722 00:06:39.722 real 0m1.548s 00:06:39.722 user 0m1.308s 00:06:39.722 sys 0m0.145s 00:06:39.722 06:52:47 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.722 06:52:47 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:39.722 06:52:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.722 06:52:47 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.722 06:52:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:39.722 06:52:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.722 06:52:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.722 ************************************ 00:06:39.722 START TEST accel_fill 00:06:39.722 ************************************ 00:06:39.722 06:52:47 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.722 06:52:47 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:39.722 06:52:47 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:39.722 [2024-07-13 06:52:47.544965] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:39.722 [2024-07-13 06:52:47.545085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76324 ] 00:06:39.722 [2024-07-13 06:52:47.685127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.723 [2024-07-13 06:52:47.762086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.981 06:52:47 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.981 06:52:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
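For the copy and fill cases in this stretch of the transcript, accel_test passes the workload-specific flags documented in the option summary earlier; a minimal sketch of equivalent direct invocations, under the same assumptions as the crc32c sketch above (placeholder SPDK_DIR, -c config omitted):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # accel_copy: plain 4 KiB copies for one second, verifying each result
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w copy -y
  # accel_fill: fill with byte value 128, queue depth 64, 64 tasks allocated per core
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y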
00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 ************************************ 00:06:41.368 END TEST accel_fill 00:06:41.368 ************************************ 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:41.368 06:52:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.368 00:06:41.368 real 0m1.505s 00:06:41.368 user 0m1.265s 00:06:41.368 sys 0m0.146s 00:06:41.368 06:52:49 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.368 06:52:49 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:41.368 06:52:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.368 06:52:49 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:41.368 06:52:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:41.368 06:52:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.368 06:52:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.368 ************************************ 00:06:41.368 START TEST accel_copy_crc32c 00:06:41.368 ************************************ 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:41.368 [2024-07-13 06:52:49.109777] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:41.368 [2024-07-13 06:52:49.109875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76362 ] 00:06:41.368 [2024-07-13 06:52:49.248066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.368 [2024-07-13 06:52:49.313121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.368 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.369 06:52:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
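The copy_crc32c workload being traced here combines a buffer copy with a CRC-32C computed over the copied data; per the run_test line it is driven with defaults apart from -y. A comparable direct run, with the same caveats as the sketches above:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path as it appears in the log
  # copy + crc32c in one operation, seed 0 (default), 4 KiB transfer size, verify
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w copy_crc32c -y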
00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.743 00:06:42.743 real 0m1.489s 00:06:42.743 user 0m1.249s 00:06:42.743 sys 0m0.143s 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.743 ************************************ 00:06:42.743 END TEST accel_copy_crc32c 00:06:42.743 ************************************ 00:06:42.743 06:52:50 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:42.743 06:52:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.743 06:52:50 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:42.743 06:52:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:42.743 06:52:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.743 06:52:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.743 ************************************ 00:06:42.743 START TEST accel_copy_crc32c_C2 00:06:42.743 ************************************ 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.743 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:42.743 [2024-07-13 06:52:50.651501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:42.743 [2024-07-13 06:52:50.651608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76398 ] 00:06:42.743 [2024-07-13 06:52:50.788261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.002 [2024-07-13 06:52:50.867974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.002 06:52:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.378 00:06:44.378 real 0m1.561s 00:06:44.378 user 0m0.016s 00:06:44.378 sys 0m0.004s 00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
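The -C 2 variant wrapping up here runs the same copy_crc32c operation over a two-element io vector (the '4096 bytes' and '8192 bytes' values read in the trace above appear to be the buffer sizes the test configures). A hedged one-line equivalent, same assumptions as before:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # copy_crc32c split across a 2-element source io vector, verify results
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2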
00:06:44.378 06:52:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:44.378 ************************************ 00:06:44.378 END TEST accel_copy_crc32c_C2 00:06:44.378 ************************************ 00:06:44.378 06:52:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.378 06:52:52 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:44.378 06:52:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:44.378 06:52:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.378 06:52:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.378 ************************************ 00:06:44.378 START TEST accel_dualcast 00:06:44.378 ************************************ 00:06:44.378 06:52:52 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:44.379 06:52:52 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:44.379 [2024-07-13 06:52:52.262166] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
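For orientation while reading the rest of this section: every case below is bracketed by the same START TEST / END TEST banners and real/user/sys summary as the accel_copy_crc32c_C2 wrap-up just above. A rough, hypothetical sketch of that run_test pattern follows; the real helper lives in autotest_common.sh and does considerably more bookkeeping than this.

# Hypothetical simplification of the run_test wrapper whose banners and
# timing summary appear around every test case in this log.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                # produces the real/user/sys lines seen in the log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# e.g. run_test_sketch accel_dualcast accel_test -t 1 -w dualcast -y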
00:06:44.379 [2024-07-13 06:52:52.262262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76431 ] 00:06:44.379 [2024-07-13 06:52:52.398924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.636 [2024-07-13 06:52:52.524077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.636 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.637 06:52:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
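The val= records above are the harness stepping through the options it configured for this dualcast run: core mask 0x1, 4096-byte buffers, the software module, the two depth settings of 32, and a 1-second verified run. A minimal standalone reproduction, assuming the build path used by this job and that accel_perf falls back to its defaults when the harness's "-c /dev/fd/62" JSON config is omitted:

# Hypothetical standalone re-run of the dualcast case traced above.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
# -t 1: run for 1 second, -w dualcast: workload, -y: verify, as in the log.
# The harness additionally passes "-c /dev/fd/62" with a generated JSON accel
# config; dropping it here (an assumption) leaves the default software module.
"$ACCEL_PERF" -t 1 -w dualcast -y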
val 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:46.011 06:52:53 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.011 00:06:46.011 real 0m1.552s 00:06:46.012 user 0m1.317s 00:06:46.012 sys 0m0.144s 00:06:46.012 06:52:53 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.012 06:52:53 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:46.012 ************************************ 00:06:46.012 END TEST accel_dualcast 00:06:46.012 ************************************ 00:06:46.012 06:52:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.012 06:52:53 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:46.012 06:52:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:46.012 06:52:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.012 06:52:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.012 ************************************ 00:06:46.012 START TEST accel_compare 00:06:46.012 ************************************ 00:06:46.012 06:52:53 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:46.012 06:52:53 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:46.012 [2024-07-13 06:52:53.870022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:46.012 [2024-07-13 06:52:53.870107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76467 ] 00:06:46.012 [2024-07-13 06:52:54.010840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.269 [2024-07-13 06:52:54.092054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.269 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.269 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.269 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.269 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.269 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.269 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.269 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.269 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.270 06:52:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.644 06:52:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.644 06:52:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 
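The IFS=: / read -r var val / case "$var" pattern that dominates this trace is accel.sh consuming accel_perf's colon-separated summary and capturing the module and opcode it reports, which is what the [[ -n software ]] / [[ -n compare ]] checks at the end of each case verify. A condensed, hypothetical version of that loop; the key strings matched in the case arms are assumptions, while the captured values (software, compare) are exactly what the trace records:

# Hypothetical condensation of the parsing loop traced throughout this section.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
accel_module="" accel_opc=""
while IFS=: read -r var val; do
    val=${val//[[:space:]]/}                  # strip the padding after ':'
    case "$var" in
        *"Module"*) accel_module=$val ;;       # key text is an assumption
        *"Workload Type"*) accel_opc=$val ;;   # key text is an assumption
    esac
done < <("$ACCEL_PERF" -t 1 -w compare -y)
# Mirrors the assertions closing each test case in this log:
[[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]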
00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:47.645 06:52:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.645 00:06:47.645 real 0m1.462s 00:06:47.645 user 0m1.254s 00:06:47.645 sys 0m0.115s 00:06:47.645 06:52:55 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.645 ************************************ 00:06:47.645 END TEST accel_compare 00:06:47.645 ************************************ 00:06:47.645 06:52:55 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:47.645 06:52:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.645 06:52:55 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:47.645 06:52:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:47.645 06:52:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.645 06:52:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.645 ************************************ 00:06:47.645 START TEST accel_xor 00:06:47.645 ************************************ 00:06:47.645 06:52:55 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:47.645 [2024-07-13 06:52:55.381718] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
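Per the run_test calls at accel.sh@107-112, this stretch of the log works through dualcast, compare, xor (twice) and the two DIF opcodes back to back against the software module. A compact, hypothetical driver for the verified copy-style cases; the DIF cases, which the harness runs without -y, are sketched further below:

# Hypothetical loop over the verified software-module workloads in this part
# of the log; each invocation matches an accel_test call traced above.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
for wl in dualcast compare xor; do
    echo "=== $wl ==="
    "$ACCEL_PERF" -t 1 -w "$wl" -y
done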
00:06:47.645 [2024-07-13 06:52:55.381814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76500 ] 00:06:47.645 [2024-07-13 06:52:55.520834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.645 [2024-07-13 06:52:55.590414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.645 06:52:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.021 06:52:56 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.021 00:06:49.021 real 0m1.431s 00:06:49.021 user 0m1.237s 00:06:49.021 sys 0m0.104s 00:06:49.021 06:52:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.021 ************************************ 00:06:49.021 END TEST accel_xor 00:06:49.021 ************************************ 00:06:49.021 06:52:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:49.021 06:52:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.021 06:52:56 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:49.021 06:52:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:49.021 06:52:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.021 06:52:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.021 ************************************ 00:06:49.021 START TEST accel_xor 00:06:49.021 ************************************ 00:06:49.021 06:52:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:49.021 06:52:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:49.021 [2024-07-13 06:52:56.868523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
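The second accel_xor case differs from the first only by "-x 3"; correspondingly the trace records val=3 where the earlier xor run recorded val=2, i.e. three XOR source buffers instead of the default two. A hypothetical side-by-side of the two invocations, under the same assumptions as the earlier sketches:

# Hypothetical comparison of the two xor cases in this log.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w xor -y          # default source count (2, per the trace)
"$ACCEL_PERF" -t 1 -w xor -y -x 3     # -x 3: three xor source buffers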
00:06:49.021 [2024-07-13 06:52:56.868645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76536 ] 00:06:49.021 [2024-07-13 06:52:57.005800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.021 [2024-07-13 06:52:57.081850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:49.280 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 06:52:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.217 06:52:58 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:50.217 06:52:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.217 00:06:50.217 real 0m1.434s 00:06:50.217 user 0m1.224s 00:06:50.217 sys 0m0.118s 00:06:50.217 ************************************ 00:06:50.217 06:52:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.217 06:52:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:50.217 END TEST accel_xor 00:06:50.217 ************************************ 00:06:50.476 06:52:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.476 06:52:58 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:50.476 06:52:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:50.476 06:52:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.476 06:52:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.476 ************************************ 00:06:50.476 START TEST accel_dif_verify 00:06:50.476 ************************************ 00:06:50.476 06:52:58 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:50.476 06:52:58 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:50.476 [2024-07-13 06:52:58.357736] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
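The dif_verify case starting here and the dif_generate case after it are driven without -y, and their config trace adds DIF-specific sizes (the '4096 bytes', '512 bytes' and '8 bytes' records) on top of the usual settings. A minimal, hypothetical reproduction of the two DIF runs referenced in the loop sketch above, assumptions as before:

# Hypothetical standalone runs of the two DIF workloads closing this section;
# note the harness invokes them without -y, unlike the copy-style cases.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w dif_verify
"$ACCEL_PERF" -t 1 -w dif_generate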
00:06:50.476 [2024-07-13 06:52:58.357821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76565 ] 00:06:50.476 [2024-07-13 06:52:58.493516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.735 [2024-07-13 06:52:58.561805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.735 06:52:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.112 06:52:59 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:52.112 06:52:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.112 00:06:52.112 real 0m1.436s 00:06:52.112 user 0m1.242s 00:06:52.112 sys 0m0.102s 00:06:52.112 06:52:59 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.112 06:52:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:52.112 ************************************ 00:06:52.112 END TEST accel_dif_verify 00:06:52.112 ************************************ 00:06:52.112 06:52:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.112 06:52:59 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:52.112 06:52:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:52.112 06:52:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.112 06:52:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.112 ************************************ 00:06:52.112 START TEST accel_dif_generate 00:06:52.112 ************************************ 00:06:52.112 06:52:59 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:52:59 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:52.112 06:52:59 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:52.112 [2024-07-13 06:52:59.843204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:52.112 [2024-07-13 06:52:59.843293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76605 ] 00:06:52.112 [2024-07-13 06:52:59.980710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.112 [2024-07-13 06:53:00.054992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.112 06:53:00 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.112 06:53:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.488 ************************************ 00:06:53.488 END TEST accel_dif_generate 00:06:53.488 ************************************ 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.488 06:53:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:53.488 
06:53:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.488 00:06:53.488 real 0m1.428s 00:06:53.488 user 0m1.227s 00:06:53.488 sys 0m0.111s 00:06:53.488 06:53:01 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.488 06:53:01 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:53.488 06:53:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.488 06:53:01 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:53.488 06:53:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:53.488 06:53:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.488 06:53:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.488 ************************************ 00:06:53.488 START TEST accel_dif_generate_copy 00:06:53.488 ************************************ 00:06:53.488 06:53:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:53.488 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:53.488 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:53.488 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.488 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:53.489 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:53.489 [2024-07-13 06:53:01.332743] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:53.489 [2024-07-13 06:53:01.332831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76634 ] 00:06:53.489 [2024-07-13 06:53:01.474297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.489 [2024-07-13 06:53:01.552944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.748 06:53:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.748 ************************************ 00:06:54.748 END TEST accel_dif_generate_copy 00:06:54.748 ************************************ 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.748 00:06:54.748 real 0m1.438s 00:06:54.748 user 0m1.222s 00:06:54.748 sys 0m0.118s 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.748 06:53:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:54.748 06:53:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.748 06:53:02 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:54.748 06:53:02 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.748 06:53:02 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:54.748 06:53:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.748 06:53:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.748 ************************************ 00:06:54.748 START TEST accel_comp 00:06:54.748 ************************************ 00:06:54.748 06:53:02 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.748 06:53:02 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:54.748 06:53:02 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:54.748 06:53:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.748 06:53:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.748 06:53:02 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.749 06:53:02 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.749 06:53:02 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:54.749 06:53:02 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.749 06:53:02 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.749 06:53:02 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.749 06:53:02 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.749 06:53:02 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.749 06:53:02 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:54.749 06:53:02 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:55.025 [2024-07-13 06:53:02.827304] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:55.025 [2024-07-13 06:53:02.827408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76668 ] 00:06:55.025 [2024-07-13 06:53:02.964417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.025 [2024-07-13 06:53:03.021643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.025 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.026 06:53:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:56.398 06:53:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.398 00:06:56.398 real 0m1.426s 00:06:56.398 user 0m1.218s 00:06:56.398 sys 0m0.114s 00:06:56.398 06:53:04 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.398 ************************************ 00:06:56.398 END TEST accel_comp 00:06:56.398 ************************************ 00:06:56.398 06:53:04 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:56.398 06:53:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.398 06:53:04 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:56.398 06:53:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:56.398 06:53:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.398 06:53:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.398 ************************************ 00:06:56.398 START TEST accel_decomp 00:06:56.398 ************************************ 00:06:56.398 06:53:04 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:56.398 06:53:04 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:56.398 [2024-07-13 06:53:04.306965] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:56.398 [2024-07-13 06:53:04.307056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76703 ] 00:06:56.398 [2024-07-13 06:53:04.444591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.656 [2024-07-13 06:53:04.519564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.656 06:53:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.049 06:53:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.049 ************************************ 00:06:58.049 END TEST accel_decomp 00:06:58.049 ************************************ 00:06:58.049 00:06:58.049 real 0m1.447s 00:06:58.049 user 0m1.236s 00:06:58.049 sys 0m0.119s 00:06:58.049 06:53:05 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.049 06:53:05 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:58.049 06:53:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.049 06:53:05 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:58.049 06:53:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:58.049 06:53:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.049 06:53:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.049 ************************************ 00:06:58.049 START TEST accel_decomp_full 00:06:58.049 ************************************ 00:06:58.049 06:53:05 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:58.049 06:53:05 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:58.049 [2024-07-13 06:53:05.814801] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:58.049 [2024-07-13 06:53:05.814888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76732 ] 00:06:58.049 [2024-07-13 06:53:05.962858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.049 [2024-07-13 06:53:06.050014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.049 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:58.307 06:53:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.240 06:53:07 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.240 06:53:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.240 ************************************ 00:06:59.241 END TEST accel_decomp_full 00:06:59.241 ************************************ 00:06:59.241 06:53:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.241 00:06:59.241 real 0m1.481s 00:06:59.241 user 0m1.261s 00:06:59.241 sys 0m0.127s 00:06:59.241 06:53:07 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.241 06:53:07 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:59.499 06:53:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.499 06:53:07 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.499 06:53:07 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:59.499 06:53:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.499 06:53:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.499 ************************************ 00:06:59.499 START TEST accel_decomp_mcore 00:06:59.499 ************************************ 00:06:59.499 06:53:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.499 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:59.499 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:59.499 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.499 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.499 06:53:07 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.500 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:59.500 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:59.500 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.500 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.500 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.500 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.500 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.500 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:59.500 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:59.500 [2024-07-13 06:53:07.349811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:59.500 [2024-07-13 06:53:07.349899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76772 ] 00:06:59.500 [2024-07-13 06:53:07.495563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.772 [2024-07-13 06:53:07.579867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.772 [2024-07-13 06:53:07.579997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.772 [2024-07-13 06:53:07.580145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.772 [2024-07-13 06:53:07.580147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.772 06:53:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 ************************************ 00:07:01.146 END TEST accel_decomp_mcore 00:07:01.146 ************************************ 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.146 00:07:01.146 real 0m1.481s 00:07:01.146 user 0m4.635s 00:07:01.146 sys 0m0.142s 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.146 06:53:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:01.146 06:53:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.146 06:53:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.146 06:53:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:01.146 06:53:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.146 06:53:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.146 ************************************ 00:07:01.146 START TEST accel_decomp_full_mcore 00:07:01.146 ************************************ 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.146 06:53:08 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:01.146 06:53:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:01.146 [2024-07-13 06:53:08.880405] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:01.146 [2024-07-13 06:53:08.880479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76804 ] 00:07:01.146 [2024-07-13 06:53:09.016436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.146 [2024-07-13 06:53:09.098460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.146 [2024-07-13 06:53:09.098570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.146 [2024-07-13 06:53:09.098701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.146 [2024-07-13 06:53:09.098704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.146 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.147 06:53:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:01.147 06:53:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.520 00:07:02.520 real 0m1.452s 00:07:02.520 user 0m0.008s 00:07:02.520 sys 0m0.005s 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.520 ************************************ 00:07:02.520 END TEST accel_decomp_full_mcore 00:07:02.520 ************************************ 00:07:02.520 06:53:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:02.520 06:53:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.520 06:53:10 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.520 06:53:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:02.520 06:53:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.520 06:53:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.520 ************************************ 00:07:02.520 START TEST accel_decomp_mthread 00:07:02.520 ************************************ 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:02.520 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:02.520 [2024-07-13 06:53:10.382731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
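The accel_decomp_mthread case traced above comes down to a single accel_perf invocation (visible at accel.sh@12). A standalone sketch of that call follows; SPDK_DIR and the per-flag readings in the comments are inferred from this log rather than taken from accel_perf's help text, and the harness's extra '-c /dev/fd/62' config argument is left out here (it is sketched after the DIF test further down).

    # Sketch: re-running the decompress/mthread workload outside the harness.
    # SPDK_DIR mirrors this CI workspace; adjust for a local checkout (assumption).
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    args=(
      -t 1                            # one-second run, matching the val='1 seconds' lines in the trace
      -w decompress                   # workload type under test
      -l "$SPDK_DIR/test/accel/bib"   # input data file used by all of these decompress tests
      -y                              # passed by accel_test; taken to be the result-verification switch (assumption)
      -T 2                            # the flag behind the "mthread" variant (assumed to be a worker-thread count)
    )
    "$SPDK_DIR/build/examples/accel_perf" "${args[@]}"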
00:07:02.520 [2024-07-13 06:53:10.382821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76847 ] 00:07:02.520 [2024-07-13 06:53:10.520362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.520 [2024-07-13 06:53:10.577098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.777 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.778 06:53:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.152 00:07:04.152 real 0m1.448s 00:07:04.152 user 0m1.242s 00:07:04.152 sys 0m0.111s 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.152 06:53:11 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:04.152 ************************************ 00:07:04.152 END TEST accel_decomp_mthread 00:07:04.152 ************************************ 00:07:04.152 06:53:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.152 06:53:11 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.152 06:53:11 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:04.152 06:53:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.152 06:53:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.152 ************************************ 00:07:04.152 START 
TEST accel_decomp_full_mthread 00:07:04.152 ************************************ 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:04.152 06:53:11 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:04.152 [2024-07-13 06:53:11.885381] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
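The accel_decomp_full_mthread case that starts here differs from the plain mthread run only by the extra '-o 0' argument, and the traced buffer value becomes '111250 bytes' instead of the '4096 bytes' seen in the plain run. A side-by-side sketch of the two invocations, under the same assumptions as the previous sketch:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Plain mthread run: the trace reports val='4096 bytes' per operation.
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -T 2
    # "Full" mthread run: only -o 0 is added, and the trace reports val='111250 bytes',
    # which suggests the whole input file is handled as a single buffer (inference from this log).
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2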
00:07:04.152 [2024-07-13 06:53:11.885477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76876 ] 00:07:04.152 [2024-07-13 06:53:12.022047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.152 [2024-07-13 06:53:12.090055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.152 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:04.153 06:53:12 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:04.153 06:53:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.527 00:07:05.527 real 0m1.471s 00:07:05.527 user 0m1.263s 00:07:05.527 sys 0m0.114s 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.527 06:53:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:05.527 ************************************ 00:07:05.527 END TEST accel_decomp_full_mthread 00:07:05.527 ************************************ 
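Every decompress variant above finishes with the same three accel.sh@27 checks, traced as '[[ -n software ]]', '[[ -n decompress ]]' and '[[ software == \s\o\f\t\w\a\r\e ]]'. The sketch below shows what those checks amount to, using the variable names the trace itself shows being set (accel_module at accel.sh@22, accel_opc at accel.sh@23); the exact source lines in accel.sh may differ.

    # Values as seen in this run; in accel.sh they are filled while parsing
    # accel_perf's configuration printout (per the IFS=:/read/case loop traced above).
    accel_module=software
    accel_opc=decompress
    # The closing assertions: an opcode and a module were detected, and the
    # software engine handled the workload in this environment.
    [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]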
00:07:05.527 06:53:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.527 06:53:13 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:05.527 06:53:13 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:05.527 06:53:13 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:05.527 06:53:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.527 06:53:13 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:05.527 06:53:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.527 06:53:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.527 06:53:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.527 06:53:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.527 06:53:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.527 06:53:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.527 06:53:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:05.527 06:53:13 accel -- accel/accel.sh@41 -- # jq -r . 00:07:05.527 ************************************ 00:07:05.527 START TEST accel_dif_functional_tests 00:07:05.527 ************************************ 00:07:05.527 06:53:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:05.527 [2024-07-13 06:53:13.445327] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:05.527 [2024-07-13 06:53:13.445419] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76917 ] 00:07:05.527 [2024-07-13 06:53:13.584991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.785 [2024-07-13 06:53:13.646145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.785 [2024-07-13 06:53:13.646295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.785 [2024-07-13 06:53:13.646300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.785 00:07:05.785 00:07:05.785 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.785 http://cunit.sourceforge.net/ 00:07:05.785 00:07:05.785 00:07:05.785 Suite: accel_dif 00:07:05.786 Test: verify: DIF generated, GUARD check ...passed 00:07:05.786 Test: verify: DIF generated, APPTAG check ...passed 00:07:05.786 Test: verify: DIF generated, REFTAG check ...passed 00:07:05.786 Test: verify: DIF not generated, GUARD check ...passed 00:07:05.786 Test: verify: DIF not generated, APPTAG check ...passed 00:07:05.786 Test: verify: DIF not generated, REFTAG check ...passed 00:07:05.786 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:05.786 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:05.786 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:05.786 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:05.786 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:05.786 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:05.786 Test: verify copy: DIF generated, GUARD check ...passed 00:07:05.786 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:05.786 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:05.786 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:05.786 Test: verify 
copy: DIF not generated, APPTAG check ...[2024-07-13 06:53:13.731777] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:05.786 [2024-07-13 06:53:13.731877] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:05.786 [2024-07-13 06:53:13.731922] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:05.786 [2024-07-13 06:53:13.731988] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:05.786 [2024-07-13 06:53:13.732126] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:05.786 [2024-07-13 06:53:13.732298] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:05.786 passed 00:07:05.786 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 06:53:13.732339] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:05.786 [2024-07-13 06:53:13.732373] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:05.786 passed 00:07:05.786 Test: generate copy: DIF generated, GUARD check ...passed 00:07:05.786 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:05.786 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:05.786 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:05.786 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:05.786 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:05.786 Test: generate copy: iovecs-len validate ...passed 00:07:05.786 Test: generate copy: buffer alignment validate ...passed 00:07:05.786 00:07:05.786 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.786 suites 1 1 n/a 0 0 00:07:05.786 tests 26 26 26 0 0 00:07:05.786 asserts 115 115 115 0 n/a 00:07:05.786 00:07:05.786 Elapsed time = 0.003 seconds 00:07:05.786 [2024-07-13 06:53:13.732646] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
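The accel_dif_functional_tests run above hands its accel configuration to the dif binary as '-c /dev/fd/62'; that path is what bash process substitution produces, so the generated JSON never touches disk. A minimal sketch of the same pattern, with '{}' standing in for whatever build_accel_config actually emits (accel_json_cfg is empty in this run, so the placeholder is an assumption):

    # Process substitution exposes the generated config as a readable fd path,
    # which is where the "/dev/fd/62" in the trace comes from.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/test/accel/dif/dif" -c <(jq -r . <<< '{}')   # '{}' is only an illustrative placeholder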
00:07:06.043 00:07:06.044 real 0m0.544s 00:07:06.044 user 0m0.723s 00:07:06.044 sys 0m0.157s 00:07:06.044 06:53:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.044 06:53:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:06.044 ************************************ 00:07:06.044 END TEST accel_dif_functional_tests 00:07:06.044 ************************************ 00:07:06.044 06:53:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.044 ************************************ 00:07:06.044 END TEST accel 00:07:06.044 ************************************ 00:07:06.044 00:07:06.044 real 0m34.398s 00:07:06.044 user 0m35.718s 00:07:06.044 sys 0m4.345s 00:07:06.044 06:53:13 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.044 06:53:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.044 06:53:14 -- common/autotest_common.sh@1142 -- # return 0 00:07:06.044 06:53:14 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:06.044 06:53:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.044 06:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.044 06:53:14 -- common/autotest_common.sh@10 -- # set +x 00:07:06.044 ************************************ 00:07:06.044 START TEST accel_rpc 00:07:06.044 ************************************ 00:07:06.044 06:53:14 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:06.044 * Looking for test storage... 00:07:06.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:06.044 06:53:14 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:06.044 06:53:14 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=76981 00:07:06.044 06:53:14 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:06.044 06:53:14 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 76981 00:07:06.044 06:53:14 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 76981 ']' 00:07:06.044 06:53:14 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.044 06:53:14 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.044 06:53:14 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.044 06:53:14 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.044 06:53:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.302 [2024-07-13 06:53:14.169794] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
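From this point the log switches to the accel_rpc suite: a bare spdk_tgt is started with --wait-for-rpc and then driven over JSON-RPC (the assign_opcode test in the lines that follow first assigns the copy opcode to a bogus 'incorrect' module and then to 'software'). Below is a condensed sketch of that flow using only binaries and RPC names that appear in this log; the sleep is a stand-in for the harness's waitforlisten helper and the error-path step is omitted.

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    rpc="$SPDK_DIR/scripts/rpc.py"
    "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &   # start the target paused, as in the trace
    tgt_pid=$!
    sleep 1                                           # stand-in for waitforlisten (assumption)
    "$rpc" accel_assign_opc -o copy -m software       # pin the copy opcode to the software module
    "$rpc" framework_start_init                       # finish subsystem initialization
    "$rpc" accel_get_opc_assignments                  # .copy should now report "software"
    kill "$tgt_pid"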
00:07:06.302 [2024-07-13 06:53:14.169900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76981 ] 00:07:06.302 [2024-07-13 06:53:14.309007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.561 [2024-07-13 06:53:14.402410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.495 06:53:15 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.495 06:53:15 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:07.495 06:53:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:07.495 06:53:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:07.495 06:53:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:07.495 06:53:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:07.495 06:53:15 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:07.495 06:53:15 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.495 06:53:15 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.495 06:53:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.495 ************************************ 00:07:07.495 START TEST accel_assign_opcode 00:07:07.495 ************************************ 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:07.495 [2024-07-13 06:53:15.267057] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:07.495 [2024-07-13 06:53:15.275068] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.495 software 00:07:07.495 00:07:07.495 real 0m0.299s 00:07:07.495 user 0m0.062s 00:07:07.495 sys 0m0.008s 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.495 ************************************ 00:07:07.495 END TEST accel_assign_opcode 00:07:07.495 ************************************ 00:07:07.495 06:53:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:07.754 06:53:15 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 76981 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 76981 ']' 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 76981 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76981 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.754 killing process with pid 76981 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76981' 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@967 -- # kill 76981 00:07:07.754 06:53:15 accel_rpc -- common/autotest_common.sh@972 -- # wait 76981 00:07:08.012 00:07:08.012 real 0m1.994s 00:07:08.012 user 0m2.180s 00:07:08.012 sys 0m0.467s 00:07:08.012 06:53:16 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.012 ************************************ 00:07:08.012 END TEST accel_rpc 00:07:08.012 ************************************ 00:07:08.012 06:53:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.012 06:53:16 -- common/autotest_common.sh@1142 -- # return 0 00:07:08.012 06:53:16 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:08.012 06:53:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.012 06:53:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.012 06:53:16 -- common/autotest_common.sh@10 -- # set +x 00:07:08.012 ************************************ 00:07:08.012 START TEST app_cmdline 00:07:08.012 ************************************ 00:07:08.012 06:53:16 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:08.271 * Looking for test storage... 
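Before the app_cmdline output continues, it is worth condensing what the accel_assign_opcode case above exercised: with the target still paused in --wait-for-rpc state, the copy opcode is pinned to the software module, initialization is completed, and the assignment is read back. A short sketch of the same flow by hand, with the rpc.py path as in this run and the verification simplified to a single jq read:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m software     # must be issued while the target is still paused
    $rpc framework_start_init                     # complete the deferred subsystem initialization
    $rpc accel_get_opc_assignments | jq -r .copy  # prints "software" if the assignment stuck
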
00:07:08.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:08.271 06:53:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:08.271 06:53:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77092 00:07:08.271 06:53:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77092 00:07:08.271 06:53:16 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:08.271 06:53:16 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 77092 ']' 00:07:08.271 06:53:16 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.271 06:53:16 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.271 06:53:16 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.271 06:53:16 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.271 06:53:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.271 [2024-07-13 06:53:16.216389] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:08.271 [2024-07-13 06:53:16.216505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77092 ] 00:07:08.531 [2024-07-13 06:53:16.358486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.531 [2024-07-13 06:53:16.461952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.468 06:53:17 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.468 06:53:17 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:09.468 06:53:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:09.468 { 00:07:09.468 "fields": { 00:07:09.468 "commit": "719d03c6a", 00:07:09.468 "major": 24, 00:07:09.469 "minor": 9, 00:07:09.469 "patch": 0, 00:07:09.469 "suffix": "-pre" 00:07:09.469 }, 00:07:09.469 "version": "SPDK v24.09-pre git sha1 719d03c6a" 00:07:09.469 } 00:07:09.469 06:53:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:09.469 06:53:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:09.469 06:53:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:09.469 06:53:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:09.469 06:53:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:09.469 06:53:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.469 06:53:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.469 06:53:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:09.469 06:53:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:09.469 06:53:17 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:09.469 06:53:17 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.728 2024/07/13 06:53:17 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:09.728 request: 00:07:09.728 { 00:07:09.728 "method": "env_dpdk_get_mem_stats", 00:07:09.728 "params": {} 00:07:09.728 } 00:07:09.728 Got JSON-RPC error response 00:07:09.728 GoRPCClient: error on JSON-RPC call 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.987 06:53:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77092 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 77092 ']' 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 77092 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77092 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77092' 00:07:09.987 killing process with pid 77092 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@967 -- # kill 77092 00:07:09.987 06:53:17 app_cmdline -- common/autotest_common.sh@972 -- # wait 77092 00:07:10.246 00:07:10.246 real 0m2.155s 00:07:10.246 user 0m2.670s 00:07:10.246 sys 0m0.537s 00:07:10.246 06:53:18 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.246 06:53:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.246 ************************************ 00:07:10.246 END TEST app_cmdline 00:07:10.246 
************************************ 00:07:10.246 06:53:18 -- common/autotest_common.sh@1142 -- # return 0 00:07:10.246 06:53:18 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:10.246 06:53:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.246 06:53:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.246 06:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:10.246 ************************************ 00:07:10.246 START TEST version 00:07:10.246 ************************************ 00:07:10.246 06:53:18 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:10.506 * Looking for test storage... 00:07:10.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:10.506 06:53:18 version -- app/version.sh@17 -- # get_header_version major 00:07:10.506 06:53:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.506 06:53:18 version -- app/version.sh@14 -- # cut -f2 00:07:10.506 06:53:18 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.506 06:53:18 version -- app/version.sh@17 -- # major=24 00:07:10.506 06:53:18 version -- app/version.sh@18 -- # get_header_version minor 00:07:10.506 06:53:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.506 06:53:18 version -- app/version.sh@14 -- # cut -f2 00:07:10.506 06:53:18 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.506 06:53:18 version -- app/version.sh@18 -- # minor=9 00:07:10.506 06:53:18 version -- app/version.sh@19 -- # get_header_version patch 00:07:10.506 06:53:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.506 06:53:18 version -- app/version.sh@14 -- # cut -f2 00:07:10.506 06:53:18 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.506 06:53:18 version -- app/version.sh@19 -- # patch=0 00:07:10.506 06:53:18 version -- app/version.sh@20 -- # get_header_version suffix 00:07:10.506 06:53:18 version -- app/version.sh@14 -- # cut -f2 00:07:10.506 06:53:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:10.506 06:53:18 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.506 06:53:18 version -- app/version.sh@20 -- # suffix=-pre 00:07:10.506 06:53:18 version -- app/version.sh@22 -- # version=24.9 00:07:10.506 06:53:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.506 06:53:18 version -- app/version.sh@28 -- # version=24.9rc0 00:07:10.506 06:53:18 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:10.506 06:53:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:10.506 06:53:18 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:10.506 06:53:18 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:10.506 00:07:10.506 real 0m0.139s 00:07:10.506 user 0m0.089s 00:07:10.506 sys 0m0.084s 00:07:10.506 06:53:18 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.506 06:53:18 version -- common/autotest_common.sh@10 -- # set +x 
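The version suite above boils down to scraping the SPDK_VERSION_* defines out of version.h and comparing the result with what the bundled Python package reports. A condensed, untested sketch of that cross-check follows; the header path, the grep/cut/tr pipeline and the PYTHONPATH come from the log, while the small get() helper and the -pre to rc0 mapping are my shorthand for the 24.9rc0 result shown above:

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    get() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    ver="$(get MAJOR).$(get MINOR)"
    [[ $(get PATCH) != 0 ]] && ver+=".$(get PATCH)"        # patch is 0 in this run, so skipped
    [[ $(get SUFFIX) == -pre ]] && ver+=rc0                # 24.9 plus -pre becomes 24.9rc0
    py=$(PYTHONPATH=/home/vagrant/spdk_repo/spdk/python python3 -c 'import spdk; print(spdk.__version__)')
    [[ $ver == "$py" ]] && echo "header and python package agree on $ver"
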
00:07:10.506 ************************************ 00:07:10.506 END TEST version 00:07:10.506 ************************************ 00:07:10.506 06:53:18 -- common/autotest_common.sh@1142 -- # return 0 00:07:10.506 06:53:18 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:10.506 06:53:18 -- spdk/autotest.sh@198 -- # uname -s 00:07:10.506 06:53:18 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:10.506 06:53:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:10.506 06:53:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:10.506 06:53:18 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:10.506 06:53:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:10.506 06:53:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:10.506 06:53:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.506 06:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:10.506 06:53:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:10.506 06:53:18 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:10.506 06:53:18 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:10.506 06:53:18 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:10.506 06:53:18 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:10.506 06:53:18 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:10.506 06:53:18 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.506 06:53:18 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:10.506 06:53:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.506 06:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:10.506 ************************************ 00:07:10.506 START TEST nvmf_tcp 00:07:10.506 ************************************ 00:07:10.506 06:53:18 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.506 * Looking for test storage... 00:07:10.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.767 06:53:18 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.767 06:53:18 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.767 06:53:18 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.767 06:53:18 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.767 06:53:18 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.767 06:53:18 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.767 06:53:18 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:10.767 06:53:18 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:10.767 06:53:18 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.767 06:53:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:10.767 06:53:18 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:10.767 06:53:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:10.767 06:53:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.767 06:53:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.767 ************************************ 00:07:10.767 START TEST nvmf_example 00:07:10.767 ************************************ 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:10.767 * Looking for test storage... 
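One detail from the common.sh sourcing above that is easy to miss: the initiator identity used throughout these nvmf suites is minted on the fly with nvme gen-hostnqn, and the host ID is simply the UUID tail of that NQN. A small sketch of that derivation is below; it requires nvme-cli, and the parameter expansion is one plausible way to peel off the UUID, not necessarily the exact code in common.sh:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:43021b44-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    echo "${NVME_HOST[@]}"                # the flags later handed to nvme connect by other suites
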
00:07:10.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.767 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:10.768 06:53:18 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:10.768 Cannot find device "nvmf_init_br" 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:10.768 Cannot find device "nvmf_tgt_br" 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:10.768 Cannot find device "nvmf_tgt_br2" 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:10.768 Cannot find device "nvmf_init_br" 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:10.768 Cannot find device "nvmf_tgt_br" 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:10.768 Cannot find device 
"nvmf_tgt_br2" 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:10.768 Cannot find device "nvmf_br" 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:10.768 Cannot find device "nvmf_init_if" 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:10.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:10.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:07:10.768 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:11.027 06:53:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:11.027 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:11.027 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:11.027 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:11.027 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:07:11.027 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:11.027 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:11.027 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:11.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:07:11.027 00:07:11.027 --- 10.0.0.2 ping statistics --- 00:07:11.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.027 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:11.027 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:11.027 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:11.027 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:07:11.027 00:07:11.027 --- 10.0.0.3 ping statistics --- 00:07:11.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.027 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:11.027 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:11.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:11.027 00:07:11.027 --- 10.0.0.1 ping statistics --- 00:07:11.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.027 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=77456 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
77456 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 77456 ']' 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.287 06:53:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.223 06:53:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.481 06:53:20 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.481 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:12.482 06:53:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:22.477 Initializing NVMe Controllers 00:07:22.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:22.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:22.477 Initialization complete. Launching workers. 00:07:22.477 ======================================================== 00:07:22.477 Latency(us) 00:07:22.477 Device Information : IOPS MiB/s Average min max 00:07:22.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15159.91 59.22 4221.51 707.21 22930.55 00:07:22.477 ======================================================== 00:07:22.477 Total : 15159.91 59.22 4221.51 707.21 22930.55 00:07:22.477 00:07:22.477 06:53:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:22.477 06:53:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:22.477 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:22.477 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:22.735 rmmod nvme_tcp 00:07:22.735 rmmod nvme_fabrics 00:07:22.735 rmmod nvme_keyring 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 77456 ']' 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 77456 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 77456 ']' 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 77456 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77456 00:07:22.735 killing process with pid 77456 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77456' 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 77456 00:07:22.735 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 77456 00:07:22.993 nvmf threads initialize successfully 00:07:22.993 bdev subsystem init successfully 
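Put together, the RPC calls and the perf invocation above amount to: create a TCP transport, back a subsystem with a 64 MB malloc bdev, expose it on 10.0.0.2:4420, and drive it for ten seconds of 4 KiB random I/O at queue depth 64 with a 30% read mix, which produced the latency summary just shown. A compact sketch of that sequence, with every command as recorded in this run and no interpretation added beyond the comments (the target itself is the namespaced nvmf example app started earlier):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # transport options exactly as used above
    $rpc bdev_malloc_create 64 512                               # 64 MB, 512-byte blocks; came back as Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
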
00:07:22.993 created a nvmf target service 00:07:22.993 create targets's poll groups done 00:07:22.993 all subsystems of target started 00:07:22.993 nvmf target is running 00:07:22.993 all subsystems of target stopped 00:07:22.993 destroy targets's poll groups done 00:07:22.993 destroyed the nvmf target service 00:07:22.993 bdev subsystem finish successfully 00:07:22.993 nvmf threads destroy successfully 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.993 06:53:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.993 00:07:22.993 real 0m12.393s 00:07:22.993 user 0m44.513s 00:07:22.993 sys 0m2.043s 00:07:22.993 06:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.993 06:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.993 ************************************ 00:07:22.993 END TEST nvmf_example 00:07:22.993 ************************************ 00:07:22.993 06:53:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:22.993 06:53:31 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.993 06:53:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.993 06:53:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.993 06:53:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.254 ************************************ 00:07:23.254 START TEST nvmf_filesystem 00:07:23.254 ************************************ 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:23.255 * Looking for test storage... 
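For completeness, before the nvmf_filesystem output continues: the nvmftestfini step that closed out nvmf_example above unwinds everything in reverse, unloading the kernel initiator modules, killing the target and flushing the test interface. A hand-rolled approximation is below; the module names, the kill of pid 77456, the address flush and the bridge delete are straight from the log, while the explicit netns deletion is an assumption about what the harness's _remove_spdk_ns helper does:

    modprobe -r nvme-tcp                         # the rmmod lines above show these modules going away
    modprobe -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"           # $nvmfpid was 77456 in this run
    ip -4 addr flush nvmf_init_if
    ip netns delete nvmf_tgt_ns_spdk             # assumed equivalent of _remove_spdk_ns
    ip link delete nvmf_br type bridge
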
00:07:23.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:23.255 06:53:31 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # 
_examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:23.255 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:23.255 #define SPDK_CONFIG_H 00:07:23.255 #define SPDK_CONFIG_APPS 1 00:07:23.255 #define SPDK_CONFIG_ARCH native 00:07:23.255 #undef SPDK_CONFIG_ASAN 00:07:23.255 #define SPDK_CONFIG_AVAHI 1 00:07:23.255 #undef SPDK_CONFIG_CET 00:07:23.255 #define SPDK_CONFIG_COVERAGE 1 00:07:23.255 #define SPDK_CONFIG_CROSS_PREFIX 00:07:23.255 #undef SPDK_CONFIG_CRYPTO 00:07:23.255 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:23.255 #undef SPDK_CONFIG_CUSTOMOCF 00:07:23.255 #undef SPDK_CONFIG_DAOS 00:07:23.255 #define SPDK_CONFIG_DAOS_DIR 00:07:23.255 #define SPDK_CONFIG_DEBUG 1 00:07:23.255 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:23.255 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:23.255 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:23.255 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:23.255 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:23.255 #undef SPDK_CONFIG_DPDK_UADK 00:07:23.255 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:23.255 #define SPDK_CONFIG_EXAMPLES 1 00:07:23.255 #undef SPDK_CONFIG_FC 00:07:23.255 #define SPDK_CONFIG_FC_PATH 00:07:23.255 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:23.255 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:23.255 #undef SPDK_CONFIG_FUSE 00:07:23.255 #undef SPDK_CONFIG_FUZZER 00:07:23.256 #define SPDK_CONFIG_FUZZER_LIB 00:07:23.256 #define SPDK_CONFIG_GOLANG 1 00:07:23.256 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:23.256 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:23.256 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:23.256 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:23.256 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:23.256 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:23.256 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:23.256 #define SPDK_CONFIG_IDXD 1 00:07:23.256 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:23.256 #undef SPDK_CONFIG_IPSEC_MB 00:07:23.256 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:23.256 #define SPDK_CONFIG_ISAL 1 00:07:23.256 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:23.256 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:23.256 #define SPDK_CONFIG_LIBDIR 00:07:23.256 #undef SPDK_CONFIG_LTO 00:07:23.256 #define SPDK_CONFIG_MAX_LCORES 128 00:07:23.256 #define SPDK_CONFIG_NVME_CUSE 1 00:07:23.256 #undef SPDK_CONFIG_OCF 00:07:23.256 #define SPDK_CONFIG_OCF_PATH 00:07:23.256 #define SPDK_CONFIG_OPENSSL_PATH 00:07:23.256 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:23.256 #define SPDK_CONFIG_PGO_DIR 00:07:23.256 #undef SPDK_CONFIG_PGO_USE 00:07:23.256 #define 
SPDK_CONFIG_PREFIX /usr/local 00:07:23.256 #undef SPDK_CONFIG_RAID5F 00:07:23.256 #undef SPDK_CONFIG_RBD 00:07:23.256 #define SPDK_CONFIG_RDMA 1 00:07:23.256 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:23.256 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:23.256 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:23.256 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:23.256 #define SPDK_CONFIG_SHARED 1 00:07:23.256 #undef SPDK_CONFIG_SMA 00:07:23.256 #define SPDK_CONFIG_TESTS 1 00:07:23.256 #undef SPDK_CONFIG_TSAN 00:07:23.256 #define SPDK_CONFIG_UBLK 1 00:07:23.256 #define SPDK_CONFIG_UBSAN 1 00:07:23.256 #undef SPDK_CONFIG_UNIT_TESTS 00:07:23.256 #undef SPDK_CONFIG_URING 00:07:23.256 #define SPDK_CONFIG_URING_PATH 00:07:23.256 #undef SPDK_CONFIG_URING_ZNS 00:07:23.256 #define SPDK_CONFIG_USDT 1 00:07:23.256 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:23.256 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:23.256 #undef SPDK_CONFIG_VFIO_USER 00:07:23.256 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:23.256 #define SPDK_CONFIG_VHOST 1 00:07:23.256 #define SPDK_CONFIG_VIRTIO 1 00:07:23.256 #undef SPDK_CONFIG_VTUNE 00:07:23.256 #define SPDK_CONFIG_VTUNE_DIR 00:07:23.256 #define SPDK_CONFIG_WERROR 1 00:07:23.256 #define SPDK_CONFIG_WPDK_DIR 00:07:23.256 #undef SPDK_CONFIG_XNVME 00:07:23.256 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:23.256 06:53:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:23.256 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:23.257 06:53:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 
00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.257 
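The long run of ": 0" / ": 1" records followed by "export SPDK_TEST_*" above appears to be bash's ":" builtin evaluating a default assignment for each test switch before exporting it; the 1/tcp values visible in the trace come from this job's environment rather than from the script. A minimal sketch of that idiom, with the gating "if" at the end purely illustrative:

  # Default-and-export idiom behind the ": 0 / export SPDK_TEST_*" records above.
  # The concrete fallbacks in the real autotest_common.sh may differ; these mirror
  # what this particular job ended up with.
  : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
  # Downstream scripts can then gate work on the exported switches (illustrative):
  if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
      echo "running NVMe-oF over TCP tests"
  fi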
06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:23.257 06:53:31 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:23.257 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 77698 ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 77698 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.wq6pG4 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.wq6pG4/tests/target /tmp/spdk.wq6pG4 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13060464640 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5985869824 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13060464640 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5985869824 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267744256 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=143360 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use 
avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=95475101696 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4227678208 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:23.258 * Looking for test storage... 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13060464640 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:23.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 
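The set_test_storage block that just returned boils down to: take the requested size plus a safety margin (2147483648 + 64 MiB = 2214592512 in the trace), walk the candidate directories, and export the first one whose filesystem reports enough free space. A simplified sketch under that reading; the real helper parses df -T into associative arrays, and GNU df is assumed here for --output=avail. testdir and storage_fallback are the trace's own values (the nvmf/target test dir and /tmp/spdk.wq6pG4):

  # Pick the first candidate directory with enough free space and publish it,
  # mirroring the "* Looking for test storage..." / "* Found test storage at ..." records.
  requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB plus the margin seen above
  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      avail_kb=$(df --output=avail "$target_dir" | tail -n1)   # assumption: GNU coreutils df
      if (( avail_kb * 1024 >= requested_size )); then
          export SPDK_TEST_STORAGE=$target_dir
          printf '* Found test storage at %s\n' "$target_dir"
          break
      fi
  done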
00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:23.258 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.259 06:53:31 
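nvmf/common.sh has just generated a throwaway host identity with nvme gen-hostnqn and stashed it in NVME_HOST for later connect calls. Roughly how those pieces fit together once a listener exists; the connect line is illustrative, the subsystem NQN, address and port are the defaults visible in this trace, and the hostid derivation is my assumption:

  # Host identity as set up above, then consumed by a later "nvme connect".
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumption: the hostid is the trailing UUID
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"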
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:23.259 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:23.518 Cannot find device 
"nvmf_tgt_br" 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:23.518 Cannot find device "nvmf_tgt_br2" 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:23.518 Cannot find device "nvmf_tgt_br" 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:23.518 Cannot find device "nvmf_tgt_br2" 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:23.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:23.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:23.518 06:53:31 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:23.518 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:23.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:07:23.776 00:07:23.776 --- 10.0.0.2 ping statistics --- 00:07:23.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.776 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:23.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:23.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:07:23.776 00:07:23.776 --- 10.0.0.3 ping statistics --- 00:07:23.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.776 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:23.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:23.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:07:23.776 00:07:23.776 --- 10.0.0.1 ping statistics --- 00:07:23.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.776 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:23.776 ************************************ 00:07:23.776 START TEST nvmf_filesystem_no_in_capsule 00:07:23.776 ************************************ 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=77866 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 77866 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 77866 ']' 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:23.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.776 06:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.776 [2024-07-13 06:53:31.716840] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:23.776 [2024-07-13 06:53:31.716948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.033 [2024-07-13 06:53:31.861079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.033 [2024-07-13 06:53:31.964021] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.033 [2024-07-13 06:53:31.964086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.033 [2024-07-13 06:53:31.964101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.033 [2024-07-13 06:53:31.964111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.033 [2024-07-13 06:53:31.964121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.033 [2024-07-13 06:53:31.964284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.033 [2024-07-13 06:53:31.964581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.033 [2024-07-13 06:53:31.965298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.033 [2024-07-13 06:53:31.965373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.971 [2024-07-13 06:53:32.781435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.971 06:53:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.971 Malloc1 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.971 [2024-07-13 06:53:33.024825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.971 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.228 06:53:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.228 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:25.228 { 00:07:25.228 "aliases": [ 00:07:25.228 "ad06bdfb-b317-4dd0-8035-e0f24ce5217b" 00:07:25.228 ], 00:07:25.228 "assigned_rate_limits": { 00:07:25.228 "r_mbytes_per_sec": 0, 00:07:25.228 "rw_ios_per_sec": 0, 00:07:25.228 "rw_mbytes_per_sec": 0, 00:07:25.228 "w_mbytes_per_sec": 0 00:07:25.228 }, 00:07:25.228 "block_size": 512, 00:07:25.228 "claim_type": "exclusive_write", 00:07:25.228 "claimed": true, 00:07:25.228 "driver_specific": {}, 00:07:25.228 "memory_domains": [ 00:07:25.228 { 00:07:25.228 "dma_device_id": "system", 00:07:25.228 "dma_device_type": 1 00:07:25.228 }, 00:07:25.228 { 00:07:25.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.228 "dma_device_type": 2 00:07:25.228 } 00:07:25.228 ], 00:07:25.228 "name": "Malloc1", 00:07:25.228 "num_blocks": 1048576, 00:07:25.228 "product_name": "Malloc disk", 00:07:25.228 "supported_io_types": { 00:07:25.228 "abort": true, 00:07:25.228 "compare": false, 00:07:25.228 "compare_and_write": false, 00:07:25.228 "copy": true, 00:07:25.228 "flush": true, 00:07:25.228 "get_zone_info": false, 00:07:25.228 "nvme_admin": false, 00:07:25.228 "nvme_io": false, 00:07:25.228 "nvme_io_md": false, 00:07:25.228 "nvme_iov_md": false, 00:07:25.228 "read": true, 00:07:25.228 "reset": true, 00:07:25.228 "seek_data": false, 00:07:25.228 "seek_hole": false, 00:07:25.228 "unmap": true, 00:07:25.228 "write": true, 00:07:25.228 "write_zeroes": true, 00:07:25.228 "zcopy": true, 00:07:25.228 "zone_append": false, 00:07:25.228 "zone_management": false 00:07:25.228 }, 00:07:25.228 "uuid": "ad06bdfb-b317-4dd0-8035-e0f24ce5217b", 00:07:25.228 "zoned": false 00:07:25.228 } 00:07:25.228 ]' 00:07:25.228 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:25.228 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:25.228 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:25.228 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:25.228 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:25.228 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:25.228 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:25.228 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.485 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.485 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:25.485 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.485 06:53:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:25.485 06:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:27.383 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:27.641 06:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.580 ************************************ 00:07:28.580 START TEST 
filesystem_ext4 00:07:28.580 ************************************ 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:28.580 mke2fs 1.46.5 (30-Dec-2021) 00:07:28.580 Discarding device blocks: 0/522240 done 00:07:28.580 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:28.580 Filesystem UUID: 8e426578-5c21-4d26-b609-a3c0d036d285 00:07:28.580 Superblock backups stored on blocks: 00:07:28.580 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:28.580 00:07:28.580 Allocating group tables: 0/64 done 00:07:28.580 Writing inode tables: 0/64 done 00:07:28.580 Creating journal (8192 blocks): done 00:07:28.580 Writing superblocks and filesystem accounting information: 0/64 done 00:07:28.580 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:28.580 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@37 -- # kill -0 77866 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.837 ************************************ 00:07:28.837 END TEST filesystem_ext4 00:07:28.837 ************************************ 00:07:28.837 00:07:28.837 real 0m0.391s 00:07:28.837 user 0m0.021s 00:07:28.837 sys 0m0.061s 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.837 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 ************************************ 00:07:29.096 START TEST filesystem_btrfs 00:07:29.096 ************************************ 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:29.096 06:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:29.096 06:53:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:29.096 btrfs-progs v6.6.2 00:07:29.096 See https://btrfs.readthedocs.io for more information. 00:07:29.096 00:07:29.096 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:29.096 NOTE: several default settings have changed in version 5.15, please make sure 00:07:29.096 this does not affect your deployments: 00:07:29.096 - DUP for metadata (-m dup) 00:07:29.096 - enabled no-holes (-O no-holes) 00:07:29.096 - enabled free-space-tree (-R free-space-tree) 00:07:29.096 00:07:29.096 Label: (null) 00:07:29.096 UUID: 4130db59-084c-4216-a0b7-2c048992eccd 00:07:29.096 Node size: 16384 00:07:29.096 Sector size: 4096 00:07:29.096 Filesystem size: 510.00MiB 00:07:29.096 Block group profiles: 00:07:29.096 Data: single 8.00MiB 00:07:29.096 Metadata: DUP 32.00MiB 00:07:29.096 System: DUP 8.00MiB 00:07:29.096 SSD detected: yes 00:07:29.096 Zoned device: no 00:07:29.096 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:29.096 Runtime features: free-space-tree 00:07:29.096 Checksum: crc32c 00:07:29.096 Number of devices: 1 00:07:29.096 Devices: 00:07:29.096 ID SIZE PATH 00:07:29.096 1 510.00MiB /dev/nvme0n1p1 00:07:29.096 00:07:29.096 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:29.096 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.096 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.096 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:29.096 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 77866 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.354 ************************************ 00:07:29.354 END TEST filesystem_btrfs 00:07:29.354 ************************************ 00:07:29.354 00:07:29.354 real 0m0.278s 00:07:29.354 user 0m0.020s 00:07:29.354 sys 0m0.069s 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.354 06:53:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.354 ************************************ 00:07:29.354 START TEST filesystem_xfs 00:07:29.354 ************************************ 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:29.354 06:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:29.354 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:29.354 = sectsz=512 attr=2, projid32bit=1 00:07:29.354 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:29.354 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:29.354 data = bsize=4096 blocks=130560, imaxpct=25 00:07:29.354 = sunit=0 swidth=0 blks 00:07:29.354 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:29.354 log =internal log bsize=4096 blocks=16384, version=2 00:07:29.354 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:29.354 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.347 Discarding blocks...Done. 
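The mkfs.xfs output above is the last of the three formats exercised by this suite (ext4, btrfs, xfs). Each iteration drives the same mount, write, sync, remove, unmount check from target/filesystem.sh; a minimal sketch of that check, reconstructed from the traced commands (device node and mount point as logged; the fs_smoke_test wrapper name is hypothetical), looks like this:

    # Sketch only: mirrors the filesystem.sh steps traced above for one fstype.
    # Assumes /dev/nvme0n1p1 already exists (created earlier with parted/partprobe).
    fs_smoke_test() {
        local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device
        case "$fstype" in
            ext4) mkfs.ext4 -F "$dev" ;;      # ext4 is forced with -F (common.sh@930)
            *)    "mkfs.$fstype" -f "$dev" ;; # btrfs/xfs are forced with -f (common.sh@932)
        esac
        mkdir -p "$mnt"
        mount "$dev" "$mnt"   # filesystem.sh@23
        touch "$mnt/aaa"      # @24 create a test file
        sync                  # @25
        rm "$mnt/aaa"         # @26 remove it again
        sync                  # @27
        umount "$mnt"         # @30
        # The real script then checks the target is still alive (kill -0 $nvmfpid)
        # and that lsblk still lists nvme0n1 and nvme0n1p1 (@37, @40, @43).
    }
    fs_smoke_test xfs

Each of the ext4/btrfs/xfs runs above reports its real/user/sys time once this check completes.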
00:07:30.347 06:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:30.347 06:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 77866 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.879 ************************************ 00:07:32.879 END TEST filesystem_xfs 00:07:32.879 ************************************ 00:07:32.879 00:07:32.879 real 0m3.171s 00:07:32.879 user 0m0.021s 00:07:32.879 sys 0m0.064s 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.879 06:53:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 77866 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 77866 ']' 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 77866 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.879 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77866 00:07:32.879 killing process with pid 77866 00:07:32.880 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.880 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.880 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77866' 00:07:32.880 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 77866 00:07:32.880 06:53:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 77866 00:07:33.138 ************************************ 00:07:33.138 END TEST nvmf_filesystem_no_in_capsule 00:07:33.138 ************************************ 00:07:33.138 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:33.138 00:07:33.138 real 0m9.530s 00:07:33.138 user 0m36.103s 00:07:33.138 sys 0m1.490s 00:07:33.138 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.138 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
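At this point the no-in-capsule suite has disconnected the host, deleted nqn.2016-06.io.spdk:cnode1, and killed nvmf_tgt (pid 77866); run_test now repeats the whole flow as nvmf_filesystem_in_capsule with in_capsule=4096. On the target side the only functional difference is the -c argument passed to nvmf_create_transport. A condensed sketch of the bring-up visible in the trace (rpc_cmd is assumed to wrap scripts/rpc.py against the default /var/tmp/spdk.sock; NQN, serial, bdev name, and addresses are taken from the log):

    # Sketch only: the RPC sequence traced in both suites, here with in-capsule data enabled.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    # -c 0 in the no-in-capsule run above; -c 4096 allows 4 KiB of in-capsule data here.
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096

    # 512 MiB malloc bdev with 512-byte blocks (num_blocks 1048576 in the bdev dump above).
    $RPC bdev_malloc_create 512 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side (the trace additionally passes --hostnqn/--hostid, omitted here):
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # The test then waits until lsblk reports a device with serial SPDKISFASTANDAWESOME.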
00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.397 ************************************ 00:07:33.397 START TEST nvmf_filesystem_in_capsule 00:07:33.397 ************************************ 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=78177 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 78177 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 78177 ']' 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.397 06:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.397 [2024-07-13 06:53:41.304160] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:33.397 [2024-07-13 06:53:41.304260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.397 [2024-07-13 06:53:41.445755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.656 [2024-07-13 06:53:41.563719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.656 [2024-07-13 06:53:41.564101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.656 [2024-07-13 06:53:41.564242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.656 [2024-07-13 06:53:41.564390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:33.656 [2024-07-13 06:53:41.564425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.656 [2024-07-13 06:53:41.564869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.656 [2024-07-13 06:53:41.564969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.656 [2024-07-13 06:53:41.565088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.656 [2024-07-13 06:53:41.565083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 [2024-07-13 06:53:42.368392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 Malloc1 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.592 06:53:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.592 [2024-07-13 06:53:42.617373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:34.592 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.593 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.593 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.593 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:34.593 { 00:07:34.593 "aliases": [ 00:07:34.593 "43cbe296-3645-49bd-96c2-68a36f1b0a78" 00:07:34.593 ], 00:07:34.593 "assigned_rate_limits": { 00:07:34.593 "r_mbytes_per_sec": 0, 00:07:34.593 "rw_ios_per_sec": 0, 00:07:34.593 "rw_mbytes_per_sec": 0, 00:07:34.593 "w_mbytes_per_sec": 0 00:07:34.593 }, 00:07:34.593 "block_size": 512, 00:07:34.593 "claim_type": "exclusive_write", 00:07:34.593 "claimed": true, 00:07:34.593 "driver_specific": {}, 00:07:34.593 "memory_domains": [ 00:07:34.593 { 00:07:34.593 "dma_device_id": "system", 00:07:34.593 "dma_device_type": 1 00:07:34.593 }, 00:07:34.593 { 00:07:34.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.593 "dma_device_type": 2 00:07:34.593 } 00:07:34.593 ], 00:07:34.593 "name": "Malloc1", 00:07:34.593 "num_blocks": 1048576, 00:07:34.593 "product_name": "Malloc disk", 00:07:34.593 "supported_io_types": { 00:07:34.593 "abort": true, 00:07:34.593 "compare": false, 00:07:34.593 "compare_and_write": false, 00:07:34.593 "copy": true, 00:07:34.593 "flush": true, 00:07:34.593 "get_zone_info": false, 00:07:34.593 "nvme_admin": false, 00:07:34.593 "nvme_io": false, 00:07:34.593 "nvme_io_md": false, 00:07:34.593 "nvme_iov_md": false, 00:07:34.593 "read": true, 00:07:34.593 "reset": true, 00:07:34.593 "seek_data": false, 00:07:34.593 "seek_hole": false, 00:07:34.593 "unmap": true, 
00:07:34.593 "write": true, 00:07:34.593 "write_zeroes": true, 00:07:34.593 "zcopy": true, 00:07:34.593 "zone_append": false, 00:07:34.593 "zone_management": false 00:07:34.593 }, 00:07:34.593 "uuid": "43cbe296-3645-49bd-96c2-68a36f1b0a78", 00:07:34.593 "zoned": false 00:07:34.593 } 00:07:34.593 ]' 00:07:34.593 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:34.873 06:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:37.402 06:53:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:37.402 06:53:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:37.402 06:53:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.337 ************************************ 00:07:38.337 START TEST filesystem_in_capsule_ext4 00:07:38.337 ************************************ 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:38.337 06:53:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:38.337 mke2fs 1.46.5 (30-Dec-2021) 00:07:38.337 Discarding device blocks: 0/522240 done 00:07:38.337 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:38.337 Filesystem UUID: 2b84ca02-5034-4440-ac97-5e6668a572d4 00:07:38.337 Superblock backups stored on blocks: 00:07:38.337 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:38.337 00:07:38.337 Allocating group tables: 0/64 done 00:07:38.337 Writing inode tables: 0/64 done 00:07:38.337 Creating journal (8192 blocks): done 00:07:38.337 Writing superblocks and filesystem accounting information: 0/64 done 00:07:38.337 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.337 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 78177 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.596 00:07:38.596 real 0m0.450s 00:07:38.596 user 0m0.019s 00:07:38.596 sys 0m0.065s 00:07:38.596 ************************************ 00:07:38.596 END TEST filesystem_in_capsule_ext4 00:07:38.596 ************************************ 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:38.596 06:53:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.596 ************************************ 00:07:38.596 START TEST filesystem_in_capsule_btrfs 00:07:38.596 ************************************ 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:38.596 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:38.855 btrfs-progs v6.6.2 00:07:38.855 See https://btrfs.readthedocs.io for more information. 00:07:38.855 00:07:38.855 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:38.855 NOTE: several default settings have changed in version 5.15, please make sure 00:07:38.855 this does not affect your deployments: 00:07:38.855 - DUP for metadata (-m dup) 00:07:38.855 - enabled no-holes (-O no-holes) 00:07:38.855 - enabled free-space-tree (-R free-space-tree) 00:07:38.855 00:07:38.855 Label: (null) 00:07:38.855 UUID: 0f2d69d3-dffd-4155-b34d-23dd44b01252 00:07:38.855 Node size: 16384 00:07:38.855 Sector size: 4096 00:07:38.855 Filesystem size: 510.00MiB 00:07:38.855 Block group profiles: 00:07:38.855 Data: single 8.00MiB 00:07:38.855 Metadata: DUP 32.00MiB 00:07:38.855 System: DUP 8.00MiB 00:07:38.855 SSD detected: yes 00:07:38.855 Zoned device: no 00:07:38.855 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:38.855 Runtime features: free-space-tree 00:07:38.855 Checksum: crc32c 00:07:38.855 Number of devices: 1 00:07:38.855 Devices: 00:07:38.855 ID SIZE PATH 00:07:38.855 1 510.00MiB /dev/nvme0n1p1 00:07:38.855 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 78177 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.855 ************************************ 00:07:38.855 END TEST filesystem_in_capsule_btrfs 00:07:38.855 ************************************ 00:07:38.855 00:07:38.855 real 0m0.328s 00:07:38.855 user 0m0.022s 00:07:38.855 sys 0m0.065s 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.855 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.113 ************************************ 00:07:39.113 START TEST filesystem_in_capsule_xfs 00:07:39.113 ************************************ 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:39.113 06:53:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:39.113 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:39.113 = sectsz=512 attr=2, projid32bit=1 00:07:39.113 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:39.113 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:39.113 data = bsize=4096 blocks=130560, imaxpct=25 00:07:39.113 = sunit=0 swidth=0 blks 00:07:39.113 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:39.113 log =internal log bsize=4096 blocks=16384, version=2 00:07:39.113 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:39.113 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:39.678 Discarding blocks...Done. 
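The ext4, btrfs, and xfs cases above all drive the same make_filesystem/mount cycle from target/filesystem.sh: format the namespace's partition, prove it accepts writes, unmount, and confirm the target process and its block devices are still present. A simplified standalone sketch of that cycle follows, with the device, mount point, and pid 78177 taken from this log; it is an illustration of the pattern, not the exact helper:

# Simplified sketch of the per-filesystem check (ext4 uses -F to force, btrfs/xfs use -f).
fstype=xfs
dev=/dev/nvme0n1p1
mnt=/mnt/device
nvmfpid=78177                       # target pid from this run; normally captured at startup

if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
"mkfs.$fstype" $force "$dev"        # format the NVMe-oF namespace partition
mount "$dev" "$mnt"
touch "$mnt/aaa" && sync            # prove the filesystem accepts writes
rm "$mnt/aaa" && sync
umount "$mnt"
kill -0 "$nvmfpid"                  # nvmf_tgt must still be alive after the I/O
lsblk -l -o NAME | grep -q -w nvme0n1      # controller still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible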
00:07:39.678 06:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:39.678 06:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 78177 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.576 ************************************ 00:07:41.576 END TEST filesystem_in_capsule_xfs 00:07:41.576 ************************************ 00:07:41.576 00:07:41.576 real 0m2.663s 00:07:41.576 user 0m0.024s 00:07:41.576 sys 0m0.053s 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:41.576 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:41.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:41.835 06:53:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 78177 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 78177 ']' 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 78177 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78177 00:07:41.835 killing process with pid 78177 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78177' 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 78177 00:07:41.835 06:53:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 78177 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.402 00:07:42.402 real 0m9.128s 00:07:42.402 user 0m34.470s 00:07:42.402 sys 0m1.483s 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.402 ************************************ 00:07:42.402 END TEST nvmf_filesystem_in_capsule 00:07:42.402 ************************************ 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.402 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.402 rmmod nvme_tcp 00:07:42.402 rmmod nvme_fabrics 00:07:42.662 rmmod nvme_keyring 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:42.662 00:07:42.662 real 0m19.477s 00:07:42.662 user 1m10.837s 00:07:42.662 sys 0m3.349s 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.662 06:53:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.662 ************************************ 00:07:42.662 END TEST nvmf_filesystem 00:07:42.662 ************************************ 00:07:42.662 06:53:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:42.662 06:53:50 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:42.662 06:53:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:42.662 06:53:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.662 06:53:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.662 ************************************ 00:07:42.662 START TEST nvmf_target_discovery 00:07:42.662 ************************************ 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:42.662 * Looking for test storage... 
00:07:42.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:42.662 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:42.922 Cannot find device "nvmf_tgt_br" 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:42.922 Cannot find device "nvmf_tgt_br2" 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:42.922 Cannot find device "nvmf_tgt_br" 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:42.922 Cannot find device "nvmf_tgt_br2" 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:42.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:42.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:42.922 06:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:43.181 06:53:51 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:43.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:43.181 00:07:43.181 --- 10.0.0.2 ping statistics --- 00:07:43.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.181 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:43.181 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:43.181 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:07:43.181 00:07:43.181 --- 10.0.0.3 ping statistics --- 00:07:43.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.181 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:43.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:43.181 00:07:43.181 --- 10.0.0.1 ping statistics --- 00:07:43.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.181 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=78634 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 78634 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 78634 ']' 00:07:43.181 06:53:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.181 06:53:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.181 [2024-07-13 06:53:51.152130] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:43.182 [2024-07-13 06:53:51.152229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.440 [2024-07-13 06:53:51.295423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.440 [2024-07-13 06:53:51.425826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.440 [2024-07-13 06:53:51.425886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.440 [2024-07-13 06:53:51.425896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.440 [2024-07-13 06:53:51.425904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.440 [2024-07-13 06:53:51.425911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
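The discovery test runs its target inside the nvmf_tgt_ns_spdk namespace built just above, so the initiator at 10.0.0.1 reaches the TCP listener at 10.0.0.2:4420 across the nvmf_br bridge. A condensed sketch of that wiring and the target launch, using the interface names and addresses shown in this log (the second target interface at 10.0.0.3 is omitted for brevity):

# Condensed veth/namespace wiring (see the nvmf_veth_init steps above for the full version).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# The target then starts inside the namespace (same flags as the nvmfappstart call above):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!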
00:07:43.440 [2024-07-13 06:53:51.426090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.440 [2024-07-13 06:53:51.426335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.440 [2024-07-13 06:53:51.426763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.440 [2024-07-13 06:53:51.426771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 [2024-07-13 06:53:52.189630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 Null1 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.378 [2024-07-13 06:53:52.244291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 Null2 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 Null3 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 Null4 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:44.378 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.379 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.379 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.379 
06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 4420 00:07:44.379 00:07:44.379 Discovery Log Number of Records 6, Generation counter 6 00:07:44.379 =====Discovery Log Entry 0====== 00:07:44.379 trtype: tcp 00:07:44.379 adrfam: ipv4 00:07:44.379 subtype: current discovery subsystem 00:07:44.379 treq: not required 00:07:44.379 portid: 0 00:07:44.379 trsvcid: 4420 00:07:44.379 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:44.379 traddr: 10.0.0.2 00:07:44.379 eflags: explicit discovery connections, duplicate discovery information 00:07:44.379 sectype: none 00:07:44.379 =====Discovery Log Entry 1====== 00:07:44.379 trtype: tcp 00:07:44.379 adrfam: ipv4 00:07:44.379 subtype: nvme subsystem 00:07:44.379 treq: not required 00:07:44.379 portid: 0 00:07:44.379 trsvcid: 4420 00:07:44.379 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:44.379 traddr: 10.0.0.2 00:07:44.379 eflags: none 00:07:44.379 sectype: none 00:07:44.379 =====Discovery Log Entry 2====== 00:07:44.379 trtype: tcp 00:07:44.379 adrfam: ipv4 00:07:44.379 subtype: nvme subsystem 00:07:44.379 treq: not required 00:07:44.379 portid: 0 00:07:44.379 trsvcid: 4420 00:07:44.379 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:44.379 traddr: 10.0.0.2 00:07:44.379 eflags: none 00:07:44.379 sectype: none 00:07:44.379 =====Discovery Log Entry 3====== 00:07:44.379 trtype: tcp 00:07:44.379 adrfam: ipv4 00:07:44.379 subtype: nvme subsystem 00:07:44.379 treq: not required 00:07:44.379 portid: 0 00:07:44.379 trsvcid: 4420 00:07:44.379 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:44.379 traddr: 10.0.0.2 00:07:44.379 eflags: none 00:07:44.379 sectype: none 00:07:44.379 =====Discovery Log Entry 4====== 00:07:44.379 trtype: tcp 00:07:44.379 adrfam: ipv4 00:07:44.379 subtype: nvme subsystem 00:07:44.379 treq: not required 00:07:44.379 portid: 0 00:07:44.379 trsvcid: 4420 00:07:44.379 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:44.379 traddr: 10.0.0.2 00:07:44.379 eflags: none 00:07:44.379 sectype: none 00:07:44.379 =====Discovery Log Entry 5====== 00:07:44.379 trtype: tcp 00:07:44.379 adrfam: ipv4 00:07:44.379 subtype: discovery subsystem referral 00:07:44.379 treq: not required 00:07:44.379 portid: 0 00:07:44.379 trsvcid: 4430 00:07:44.379 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:44.379 traddr: 10.0.0.2 00:07:44.379 eflags: none 00:07:44.379 sectype: none 00:07:44.379 Perform nvmf subsystem discovery via RPC 00:07:44.379 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:44.379 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:44.379 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.379 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.379 [ 00:07:44.379 { 00:07:44.379 "allow_any_host": true, 00:07:44.379 "hosts": [], 00:07:44.379 "listen_addresses": [ 00:07:44.379 { 00:07:44.379 "adrfam": "IPv4", 00:07:44.379 "traddr": "10.0.0.2", 00:07:44.379 "trsvcid": "4420", 00:07:44.379 "trtype": "TCP" 00:07:44.379 } 00:07:44.379 ], 00:07:44.379 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:44.379 "subtype": "Discovery" 00:07:44.379 }, 00:07:44.379 { 00:07:44.379 "allow_any_host": true, 00:07:44.379 "hosts": [], 00:07:44.379 "listen_addresses": [ 00:07:44.379 { 
00:07:44.379 "adrfam": "IPv4", 00:07:44.379 "traddr": "10.0.0.2", 00:07:44.379 "trsvcid": "4420", 00:07:44.379 "trtype": "TCP" 00:07:44.379 } 00:07:44.379 ], 00:07:44.379 "max_cntlid": 65519, 00:07:44.379 "max_namespaces": 32, 00:07:44.379 "min_cntlid": 1, 00:07:44.379 "model_number": "SPDK bdev Controller", 00:07:44.379 "namespaces": [ 00:07:44.379 { 00:07:44.379 "bdev_name": "Null1", 00:07:44.379 "name": "Null1", 00:07:44.379 "nguid": "44F62EFAFBBD4B35B4BAA9DCC5524395", 00:07:44.379 "nsid": 1, 00:07:44.379 "uuid": "44f62efa-fbbd-4b35-b4ba-a9dcc5524395" 00:07:44.379 } 00:07:44.379 ], 00:07:44.379 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:44.379 "serial_number": "SPDK00000000000001", 00:07:44.379 "subtype": "NVMe" 00:07:44.379 }, 00:07:44.379 { 00:07:44.379 "allow_any_host": true, 00:07:44.379 "hosts": [], 00:07:44.379 "listen_addresses": [ 00:07:44.379 { 00:07:44.379 "adrfam": "IPv4", 00:07:44.379 "traddr": "10.0.0.2", 00:07:44.379 "trsvcid": "4420", 00:07:44.379 "trtype": "TCP" 00:07:44.379 } 00:07:44.379 ], 00:07:44.379 "max_cntlid": 65519, 00:07:44.379 "max_namespaces": 32, 00:07:44.379 "min_cntlid": 1, 00:07:44.379 "model_number": "SPDK bdev Controller", 00:07:44.379 "namespaces": [ 00:07:44.379 { 00:07:44.379 "bdev_name": "Null2", 00:07:44.379 "name": "Null2", 00:07:44.379 "nguid": "A8BEF1F047A74F4EA4A622194171061B", 00:07:44.379 "nsid": 1, 00:07:44.379 "uuid": "a8bef1f0-47a7-4f4e-a4a6-22194171061b" 00:07:44.379 } 00:07:44.379 ], 00:07:44.379 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:44.379 "serial_number": "SPDK00000000000002", 00:07:44.379 "subtype": "NVMe" 00:07:44.379 }, 00:07:44.379 { 00:07:44.379 "allow_any_host": true, 00:07:44.379 "hosts": [], 00:07:44.379 "listen_addresses": [ 00:07:44.379 { 00:07:44.379 "adrfam": "IPv4", 00:07:44.379 "traddr": "10.0.0.2", 00:07:44.379 "trsvcid": "4420", 00:07:44.379 "trtype": "TCP" 00:07:44.379 } 00:07:44.379 ], 00:07:44.379 "max_cntlid": 65519, 00:07:44.379 "max_namespaces": 32, 00:07:44.379 "min_cntlid": 1, 00:07:44.379 "model_number": "SPDK bdev Controller", 00:07:44.379 "namespaces": [ 00:07:44.379 { 00:07:44.379 "bdev_name": "Null3", 00:07:44.379 "name": "Null3", 00:07:44.379 "nguid": "D484C2282F894C8FB3BFCE12D2D4570A", 00:07:44.379 "nsid": 1, 00:07:44.379 "uuid": "d484c228-2f89-4c8f-b3bf-ce12d2d4570a" 00:07:44.379 } 00:07:44.379 ], 00:07:44.379 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:44.379 "serial_number": "SPDK00000000000003", 00:07:44.379 "subtype": "NVMe" 00:07:44.379 }, 00:07:44.379 { 00:07:44.379 "allow_any_host": true, 00:07:44.379 "hosts": [], 00:07:44.379 "listen_addresses": [ 00:07:44.379 { 00:07:44.379 "adrfam": "IPv4", 00:07:44.379 "traddr": "10.0.0.2", 00:07:44.379 "trsvcid": "4420", 00:07:44.379 "trtype": "TCP" 00:07:44.379 } 00:07:44.379 ], 00:07:44.379 "max_cntlid": 65519, 00:07:44.379 "max_namespaces": 32, 00:07:44.379 "min_cntlid": 1, 00:07:44.379 "model_number": "SPDK bdev Controller", 00:07:44.379 "namespaces": [ 00:07:44.379 { 00:07:44.379 "bdev_name": "Null4", 00:07:44.379 "name": "Null4", 00:07:44.379 "nguid": "EC19B18D7C2443AC93E7CC2C8048E953", 00:07:44.379 "nsid": 1, 00:07:44.379 "uuid": "ec19b18d-7c24-43ac-93e7-cc2c8048e953" 00:07:44.379 } 00:07:44.379 ], 00:07:44.379 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:44.379 "serial_number": "SPDK00000000000004", 00:07:44.379 "subtype": "NVMe" 00:07:44.640 } 00:07:44.640 ] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.640 rmmod nvme_tcp 00:07:44.640 rmmod nvme_fabrics 00:07:44.640 rmmod nvme_keyring 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 78634 ']' 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 78634 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 78634 ']' 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 78634 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78634 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.640 
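For reference, the teardown traced above is a short loop: each of the four subsystems is deleted, then its backing null bdev, the discovery referral added during setup is removed, and bdev_get_bdevs is checked for leftovers. A minimal sketch of the equivalent commands, assuming the stock scripts/rpc.py wrapper in place of the harness's rpc_cmd helper and an SPDK target already running:

# Sketch only; mirrors the cleanup loop in target/discovery.sh (rpc.py path is an assumption).
for i in $(seq 1 4); do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # drop the NVMe-oF subsystem
    scripts/rpc.py bdev_null_delete "Null${i}"                             # drop its backing null bdev
done
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430   # remove the referral added at setup
check_bdevs=$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')            # expect empty output
[ -z "$check_bdevs" ] || echo "leftover bdevs: $check_bdevs"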
06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.640 killing process with pid 78634 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78634' 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 78634 00:07:44.640 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 78634 00:07:44.902 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.902 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.902 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.902 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.902 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.902 06:53:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.902 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.902 06:53:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.164 06:53:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:45.164 00:07:45.164 real 0m2.413s 00:07:45.164 user 0m6.368s 00:07:45.164 sys 0m0.675s 00:07:45.164 06:53:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.164 06:53:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:45.164 ************************************ 00:07:45.164 END TEST nvmf_target_discovery 00:07:45.164 ************************************ 00:07:45.164 06:53:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:45.164 06:53:53 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.164 06:53:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.164 06:53:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.164 06:53:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.164 ************************************ 00:07:45.164 START TEST nvmf_referrals 00:07:45.164 ************************************ 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:45.164 * Looking for test storage... 
00:07:45.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:45.164 Cannot find device "nvmf_tgt_br" 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.164 Cannot find device "nvmf_tgt_br2" 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:45.164 Cannot find device "nvmf_tgt_br" 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:45.164 Cannot find device "nvmf_tgt_br2" 
00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:45.164 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:45.422 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:45.679 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:45.679 06:53:53 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:45.679 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:45.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:07:45.679 00:07:45.679 --- 10.0.0.2 ping statistics --- 00:07:45.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.679 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:45.679 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:45.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:45.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:07:45.679 00:07:45.680 --- 10.0.0.3 ping statistics --- 00:07:45.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.680 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:45.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:07:45.680 00:07:45.680 --- 10.0.0.1 ping statistics --- 00:07:45.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.680 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=78863 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 78863 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 78863 ']' 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
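The nvmf_veth_init sequence above builds the virtual test topology: one veth pair for the initiator, one per target interface, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, plus a port-4420 iptables accept rule, and single pings to verify reachability before nvmf_tgt is launched inside the namespace. A condensed sketch using the interface names and addresses from the log (the second target pair, nvmf_tgt_if2/10.0.0.3, is handled identically; several ip link ... up steps and all error handling are omitted):

# Condensed topology sketch; run as root.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                      # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                            # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                             # bridge the host-side ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # reachability check before starting the target
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the namespace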
00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.680 06:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.680 [2024-07-13 06:53:53.624829] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:45.680 [2024-07-13 06:53:53.625497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.937 [2024-07-13 06:53:53.773162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.937 [2024-07-13 06:53:53.882051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.937 [2024-07-13 06:53:53.882120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.937 [2024-07-13 06:53:53.882135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.937 [2024-07-13 06:53:53.882146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.937 [2024-07-13 06:53:53.882155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.937 [2024-07-13 06:53:53.882310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.937 [2024-07-13 06:53:53.883018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.937 [2024-07-13 06:53:53.883191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.937 [2024-07-13 06:53:53.883196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 [2024-07-13 06:53:54.631045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 [2024-07-13 06:53:54.650884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:46.870 06:53:54 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.870 06:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.129 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:47.129 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:47.129 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.129 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.129 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.129 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.129 06:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:47.129 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.387 06:53:55 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:47.387 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:47.387 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:47.387 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:47.387 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.387 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:47.388 06:53:55 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.388 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.644 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:47.900 
06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.900 rmmod nvme_tcp 00:07:47.900 rmmod nvme_fabrics 00:07:47.900 rmmod nvme_keyring 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 78863 ']' 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 78863 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 78863 ']' 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 78863 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78863 00:07:47.900 killing process with pid 78863 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78863' 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 78863 00:07:47.900 06:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 78863 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:48.158 ************************************ 00:07:48.158 END TEST nvmf_referrals 00:07:48.158 ************************************ 
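The nvmf_referrals run that just finished is essentially an RPC round-trip: create the TCP transport, expose a discovery listener on 10.0.0.2:8009, add referrals to 127.0.0.2/3/4:4430, compare what nvmf_discovery_get_referrals reports against what an initiator extracts from the discovery log page, then repeat with discovery-NQN and subsystem-NQN referrals before removing them all. A condensed sketch, calling scripts/rpc.py directly instead of the harness's rpc_cmd and omitting the --hostnqn/--hostid options shown in the log:

# Referral round-trip sketch; flags match the commands traced above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
scripts/rpc.py nvmf_discovery_get_referrals | jq length             # target-side view: expect 3
# Initiator-side view of the same referrals, via the discovery log page:
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done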
00:07:48.158 00:07:48.158 real 0m3.156s 00:07:48.158 user 0m9.991s 00:07:48.158 sys 0m0.902s 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.158 06:53:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.417 06:53:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:48.417 06:53:56 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:48.417 06:53:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.417 06:53:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.417 06:53:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.417 ************************************ 00:07:48.417 START TEST nvmf_connect_disconnect 00:07:48.417 ************************************ 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:48.417 * Looking for test storage... 00:07:48.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.417 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.418 06:53:56 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:07:48.418 Cannot find device "nvmf_tgt_br" 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:48.418 Cannot find device "nvmf_tgt_br2" 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:48.418 Cannot find device "nvmf_tgt_br" 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:48.418 Cannot find device "nvmf_tgt_br2" 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:48.418 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:48.676 06:53:56 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:48.676 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:48.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:07:48.934 00:07:48.934 --- 10.0.0.2 ping statistics --- 00:07:48.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.934 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:48.934 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:48.934 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:48.934 00:07:48.934 --- 10.0.0.3 ping statistics --- 00:07:48.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.934 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:48.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:07:48.934 00:07:48.934 --- 10.0.0.1 ping statistics --- 00:07:48.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.934 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:48.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=79168 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 79168 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 79168 ']' 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.934 06:53:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:48.934 [2024-07-13 06:53:56.848019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:48.934 [2024-07-13 06:53:56.848130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.934 [2024-07-13 06:53:56.987248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.192 [2024-07-13 06:53:57.107509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
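Note: the ip/iptables/ping sequence traced above is the veth topology that nvmf_veth_init builds before the target is started. The following is a condensed, illustrative recap of those steps (names, addresses, and rules are taken from the trace; link-up steps omitted), not an additional command the test runs.
# target runs inside its own network namespace
ip netns add nvmf_tgt_ns_spdk
# three veth pairs: one for the initiator, two for the target
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side ends into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# address the initiator side and both target interfaces
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bridge the host-side peers together so all endpoints share one L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP (port 4420) in and bridged forwarding across nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the three pings above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the
# namespace) confirm the topology is reachable in both directions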
00:07:49.192 [2024-07-13 06:53:57.107940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.192 [2024-07-13 06:53:57.108075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.192 [2024-07-13 06:53:57.108197] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.192 [2024-07-13 06:53:57.108239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.192 [2024-07-13 06:53:57.108503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.192 [2024-07-13 06:53:57.108607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.192 [2024-07-13 06:53:57.108718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.192 [2024-07-13 06:53:57.108722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.125 [2024-07-13 06:53:57.918593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.125 06:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.125 [2024-07-13 06:53:57.997961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.125 06:53:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.125 06:53:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:50.125 06:53:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:50.125 06:53:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:50.125 06:53:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:52.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:02.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.207 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.665 rmmod nvme_tcp 00:11:34.665 rmmod nvme_fabrics 00:11:34.665 rmmod nvme_keyring 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 79168 ']' 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 79168 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 79168 ']' 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 79168 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79168 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:11:34.665 killing process with pid 79168 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79168' 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 79168 00:11:34.665 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 79168 00:11:34.923 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:35.181 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:35.181 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:35.181 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:35.181 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:35.181 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.181 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.181 06:57:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.181 06:57:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:35.181 ************************************ 00:11:35.181 END TEST nvmf_connect_disconnect 00:11:35.181 ************************************ 00:11:35.181 00:11:35.181 real 3m46.774s 00:11:35.181 user 14m44.885s 00:11:35.181 sys 0m20.191s 00:11:35.181 06:57:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.181 06:57:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.181 06:57:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:35.181 06:57:43 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:35.181 06:57:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:35.181 06:57:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.181 06:57:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:35.181 ************************************ 00:11:35.181 START TEST nvmf_multitarget 00:11:35.181 ************************************ 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:35.181 * Looking for test storage... 
00:11:35.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.181 06:57:43 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:35.181 Cannot find device "nvmf_tgt_br" 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:35.181 Cannot find device "nvmf_tgt_br2" 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:35.181 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:35.439 Cannot find device "nvmf_tgt_br" 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:35.439 Cannot find device "nvmf_tgt_br2" 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:11:35.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:35.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:35.439 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:35.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:11:35.697 00:11:35.697 --- 10.0.0.2 ping statistics --- 00:11:35.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.697 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:35.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:35.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:35.697 00:11:35.697 --- 10.0.0.3 ping statistics --- 00:11:35.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.697 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:35.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:35.697 00:11:35.697 --- 10.0.0.1 ping statistics --- 00:11:35.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.697 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=82950 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 82950 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 82950 ']' 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
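Note: the nvmfappstart/waitforlisten pattern traced here launches nvmf_tgt inside the target namespace and then polls until its RPC socket answers before any rpc.py calls are made. A minimal sketch of that pattern, assuming rpc.py's rpc_get_methods call as the readiness probe and the /var/tmp/spdk.sock path shown in the log:
# start the target on cores 0-3 inside the test namespace (command as traced)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until the app is up and listening on its UNIX-domain RPC socket
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods > /dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target crashed
    sleep 0.5
done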
00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.697 06:57:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:35.697 [2024-07-13 06:57:43.629941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:11:35.697 [2024-07-13 06:57:43.630016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.955 [2024-07-13 06:57:43.769632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.955 [2024-07-13 06:57:43.889887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.956 [2024-07-13 06:57:43.890234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.956 [2024-07-13 06:57:43.890494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.956 [2024-07-13 06:57:43.890698] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.956 [2024-07-13 06:57:43.890925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.956 [2024-07-13 06:57:43.891219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.956 [2024-07-13 06:57:43.891370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.956 [2024-07-13 06:57:43.891455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.956 [2024-07-13 06:57:43.891445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:36.889 06:57:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:37.146 "nvmf_tgt_1" 00:11:37.146 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:37.146 "nvmf_tgt_2" 00:11:37.146 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
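Note: the multitarget flow being exercised here, as traced above and continued below, drives multitarget_rpc.py to add and remove named targets and checks the target count with jq after each step. A condensed sketch (the $rpc shorthand is illustrative; the expected counts come from the test's own checks):
rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
$rpc nvmf_get_targets | jq length              # 1: only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32    # prints "nvmf_tgt_1"
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32    # prints "nvmf_tgt_2"
$rpc nvmf_get_targets | jq length              # 3: default plus the two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1          # prints "true"
$rpc nvmf_delete_target -n nvmf_tgt_2          # prints "true"
$rpc nvmf_get_targets | jq length              # back to 1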
00:11:37.146 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:37.405 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:37.405 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:37.405 true 00:11:37.405 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:37.665 true 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:37.665 rmmod nvme_tcp 00:11:37.665 rmmod nvme_fabrics 00:11:37.665 rmmod nvme_keyring 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 82950 ']' 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 82950 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 82950 ']' 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 82950 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82950 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:37.665 killing process with pid 82950 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82950' 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 82950 00:11:37.665 06:57:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 82950 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:38.232 ************************************ 00:11:38.232 END TEST nvmf_multitarget 00:11:38.232 ************************************ 00:11:38.232 00:11:38.232 real 0m3.010s 00:11:38.232 user 0m9.704s 00:11:38.232 sys 0m0.773s 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.232 06:57:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:38.232 06:57:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:38.232 06:57:46 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:38.232 06:57:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:38.232 06:57:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.232 06:57:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:38.232 ************************************ 00:11:38.232 START TEST nvmf_rpc 00:11:38.232 ************************************ 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:38.232 * Looking for test storage... 
00:11:38.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:38.232 Cannot find device "nvmf_tgt_br" 00:11:38.232 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:11:38.233 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:38.491 Cannot find device "nvmf_tgt_br2" 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:38.491 Cannot find device "nvmf_tgt_br" 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:38.491 Cannot find device "nvmf_tgt_br2" 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:38.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:38.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:38.491 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:38.492 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:38.492 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:38.492 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:38.492 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:38.750 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:38.750 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:38.750 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:38.750 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:38.750 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:38.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:11:38.750 00:11:38.750 --- 10.0.0.2 ping statistics --- 00:11:38.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.751 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:38.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:38.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:11:38.751 00:11:38.751 --- 10.0.0.3 ping statistics --- 00:11:38.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.751 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:38.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:38.751 00:11:38.751 --- 10.0.0.1 ping statistics --- 00:11:38.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.751 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=83185 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 83185 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 83185 ']' 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.751 06:57:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.751 [2024-07-13 06:57:46.715092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:11:38.751 [2024-07-13 06:57:46.715197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.010 [2024-07-13 06:57:46.859523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.010 [2024-07-13 06:57:46.963913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.010 [2024-07-13 06:57:46.963969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
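The trace above is the nvmf_veth_init path of test/nvmf/common.sh: it builds a network namespace (nvmf_tgt_ns_spdk) for the target side of the veth pairs, leaves the initiator ends in the root namespace, bridges the peer ends over nvmf_br, opens TCP port 4420 in iptables, and ping-checks 10.0.0.2, 10.0.0.3 and 10.0.0.1 before nvmf_tgt is started inside the namespace. A condensed sketch of that topology, using only commands, interface names and addresses that appear in the trace (the second target interface, nvmf_tgt_if2, is omitted for brevity):

# minimal sketch of the veth/namespace topology shown above (names/addresses taken from the trace)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator pair, stays in root ns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br          # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # move target end into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up         # bridge the two peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                # connectivity check before starting nvmf_tgt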
00:11:39.010 [2024-07-13 06:57:46.963978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.010 [2024-07-13 06:57:46.963986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.010 [2024-07-13 06:57:46.963992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.010 [2024-07-13 06:57:46.964178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.010 [2024-07-13 06:57:46.964480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.010 [2024-07-13 06:57:46.964971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.010 [2024-07-13 06:57:46.964993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:39.946 "poll_groups": [ 00:11:39.946 { 00:11:39.946 "admin_qpairs": 0, 00:11:39.946 "completed_nvme_io": 0, 00:11:39.946 "current_admin_qpairs": 0, 00:11:39.946 "current_io_qpairs": 0, 00:11:39.946 "io_qpairs": 0, 00:11:39.946 "name": "nvmf_tgt_poll_group_000", 00:11:39.946 "pending_bdev_io": 0, 00:11:39.946 "transports": [] 00:11:39.946 }, 00:11:39.946 { 00:11:39.946 "admin_qpairs": 0, 00:11:39.946 "completed_nvme_io": 0, 00:11:39.946 "current_admin_qpairs": 0, 00:11:39.946 "current_io_qpairs": 0, 00:11:39.946 "io_qpairs": 0, 00:11:39.946 "name": "nvmf_tgt_poll_group_001", 00:11:39.946 "pending_bdev_io": 0, 00:11:39.946 "transports": [] 00:11:39.946 }, 00:11:39.946 { 00:11:39.946 "admin_qpairs": 0, 00:11:39.946 "completed_nvme_io": 0, 00:11:39.946 "current_admin_qpairs": 0, 00:11:39.946 "current_io_qpairs": 0, 00:11:39.946 "io_qpairs": 0, 00:11:39.946 "name": "nvmf_tgt_poll_group_002", 00:11:39.946 "pending_bdev_io": 0, 00:11:39.946 "transports": [] 00:11:39.946 }, 00:11:39.946 { 00:11:39.946 "admin_qpairs": 0, 00:11:39.946 "completed_nvme_io": 0, 00:11:39.946 "current_admin_qpairs": 0, 00:11:39.946 "current_io_qpairs": 0, 00:11:39.946 "io_qpairs": 0, 00:11:39.946 "name": "nvmf_tgt_poll_group_003", 00:11:39.946 "pending_bdev_io": 0, 00:11:39.946 "transports": [] 00:11:39.946 } 00:11:39.946 ], 00:11:39.946 "tick_rate": 2200000000 00:11:39.946 }' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.946 [2024-07-13 06:57:47.896137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:39.946 "poll_groups": [ 00:11:39.946 { 00:11:39.946 "admin_qpairs": 0, 00:11:39.946 "completed_nvme_io": 0, 00:11:39.946 "current_admin_qpairs": 0, 00:11:39.946 "current_io_qpairs": 0, 00:11:39.946 "io_qpairs": 0, 00:11:39.946 "name": "nvmf_tgt_poll_group_000", 00:11:39.946 "pending_bdev_io": 0, 00:11:39.946 "transports": [ 00:11:39.946 { 00:11:39.946 "trtype": "TCP" 00:11:39.946 } 00:11:39.946 ] 00:11:39.946 }, 00:11:39.946 { 00:11:39.946 "admin_qpairs": 0, 00:11:39.946 "completed_nvme_io": 0, 00:11:39.946 "current_admin_qpairs": 0, 00:11:39.946 "current_io_qpairs": 0, 00:11:39.946 "io_qpairs": 0, 00:11:39.946 "name": "nvmf_tgt_poll_group_001", 00:11:39.946 "pending_bdev_io": 0, 00:11:39.946 "transports": [ 00:11:39.946 { 00:11:39.946 "trtype": "TCP" 00:11:39.946 } 00:11:39.946 ] 00:11:39.946 }, 00:11:39.946 { 00:11:39.946 "admin_qpairs": 0, 00:11:39.946 "completed_nvme_io": 0, 00:11:39.946 "current_admin_qpairs": 0, 00:11:39.946 "current_io_qpairs": 0, 00:11:39.946 "io_qpairs": 0, 00:11:39.946 "name": "nvmf_tgt_poll_group_002", 00:11:39.946 "pending_bdev_io": 0, 00:11:39.946 "transports": [ 00:11:39.946 { 00:11:39.946 "trtype": "TCP" 00:11:39.946 } 00:11:39.946 ] 00:11:39.946 }, 00:11:39.946 { 00:11:39.946 "admin_qpairs": 0, 00:11:39.946 "completed_nvme_io": 0, 00:11:39.946 "current_admin_qpairs": 0, 00:11:39.946 "current_io_qpairs": 0, 00:11:39.946 "io_qpairs": 0, 00:11:39.946 "name": "nvmf_tgt_poll_group_003", 00:11:39.946 "pending_bdev_io": 0, 00:11:39.946 "transports": [ 00:11:39.946 { 00:11:39.946 "trtype": "TCP" 00:11:39.946 } 00:11:39.946 ] 00:11:39.946 } 00:11:39.946 ], 00:11:39.946 "tick_rate": 2200000000 00:11:39.946 }' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:39.946 06:57:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:39.946 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
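Everything after nvmfappstart is driven over JSON-RPC: rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py, and the jcount/jsum helpers in target/rpc.sh are thin jq/awk pipelines over nvmf_get_stats output. The same checks and the transport creation issued directly would look roughly like the sketch below; the flags and jq filters are copied verbatim from the trace, and the repository path is the one this run uses.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to the target over /var/tmp/spdk.sock

$RPC nvmf_get_stats | jq '.poll_groups[].name' | wc -l       # jcount: 4 poll groups, one per core in -m 0xF
$RPC nvmf_get_stats | jq '.poll_groups[0].transports[0]'     # null until a transport exists

$RPC nvmf_create_transport -t tcp -o -u 8192                 # "*** TCP Transport Init ***" in the trace

$RPC nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'   # jsum: expect 0
$RPC nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1}END{print s}'   # jsum: expect 0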
00:11:39.946 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:39.946 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.206 Malloc1 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.206 [2024-07-13 06:57:48.108476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -a 10.0.0.2 -s 4420 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -a 10.0.0.2 -s 4420 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -a 10.0.0.2 -s 4420 00:11:40.206 [2024-07-13 06:57:48.136845] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd' 00:11:40.206 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:40.206 could not add new controller: failed to write to nvme-fabrics device 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:40.206 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:40.207 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:40.207 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:11:40.207 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.207 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.207 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.207 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.465 06:57:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.465 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:40.465 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.465 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:40.465 06:57:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:42.369 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:42.369 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:42.369 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.369 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:42.369 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.369 06:57:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:42.369 06:57:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:42.628 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.629 [2024-07-13 06:57:50.528668] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd' 00:11:42.629 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:42.629 could not add new controller: failed to write to nvme-fabrics device 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.629 06:57:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.887 06:57:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.887 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:42.887 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.887 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:42.887 06:57:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:44.789 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:44.789 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:44.789 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.789 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:44.790 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.790 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:44.790 06:57:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:45.047 06:57:52 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.047 [2024-07-13 06:57:52.921966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.047 06:57:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.047 06:57:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.047 06:57:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.047 06:57:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.047 06:57:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:45.047 06:57:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.579 [2024-07-13 06:57:55.227126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:47.579 06:57:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.483 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.484 [2024-07-13 06:57:57.527968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.484 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.742 06:57:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.742 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:49.742 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.742 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:49.742 06:57:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.271 [2024-07-13 06:57:59.832503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.271 06:57:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.271 06:58:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.271 06:58:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.271 06:58:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.271 06:58:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:52.271 06:58:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.174 
06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.174 [2024-07-13 06:58:02.237631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.174 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.433 06:58:02 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.433 06:58:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:56.445 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.703 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 [2024-07-13 06:58:04.558219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 [2024-07-13 06:58:04.606189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 [2024-07-13 06:58:04.654263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
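The connect/disconnect handshake near the top of this test polls the kernel's block devices until the namespace with the expected serial appears, then polls until it is gone again. A sketch of the two helpers from common/autotest_common.sh, reconstructed from the traced commands (retry limit and lsblk/grep flags are as shown in the trace; the surrounding plumbing is assumed):

    # approximate reconstruction; not a verbatim copy of autotest_common.sh
    waitforserial() {                      # used after `nvme connect`
        local serial=$1 expected=${2:-1} i=0 found=0
        while ((i++ <= 15)); do
            sleep 2
            # count block devices whose SERIAL column matches
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            ((found == expected)) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {           # used after `nvme disconnect`
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((i++ > 15)) && return 1
            sleep 2
        done
        return 0
    }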
00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 [2024-07-13 06:58:04.702378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
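The block of RPCs above repeats because target/rpc.sh drives a create/configure/teardown loop against the same NQN, five times per the `seq 1 5` in the trace. Reconstructed from the traced script lines (rpc_cmd is assumed to forward to scripts/rpc.py against the running target):

    loops=5                                # seq 1 5 in the trace
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done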
00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.704 [2024-07-13 06:58:04.750442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.704 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.705 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:56.964 "poll_groups": [ 00:11:56.964 { 00:11:56.964 "admin_qpairs": 2, 00:11:56.964 "completed_nvme_io": 165, 00:11:56.964 "current_admin_qpairs": 0, 00:11:56.964 "current_io_qpairs": 0, 00:11:56.964 "io_qpairs": 16, 00:11:56.964 "name": "nvmf_tgt_poll_group_000", 00:11:56.964 "pending_bdev_io": 0, 00:11:56.964 "transports": [ 00:11:56.964 { 00:11:56.964 "trtype": "TCP" 00:11:56.964 } 00:11:56.964 ] 00:11:56.964 }, 00:11:56.964 { 00:11:56.964 "admin_qpairs": 3, 00:11:56.964 "completed_nvme_io": 68, 00:11:56.964 "current_admin_qpairs": 0, 00:11:56.964 "current_io_qpairs": 0, 00:11:56.964 "io_qpairs": 17, 00:11:56.964 "name": "nvmf_tgt_poll_group_001", 00:11:56.964 "pending_bdev_io": 0, 00:11:56.964 "transports": [ 00:11:56.964 { 00:11:56.964 "trtype": "TCP" 00:11:56.964 } 00:11:56.964 ] 00:11:56.964 }, 00:11:56.964 { 00:11:56.964 "admin_qpairs": 1, 00:11:56.964 
"completed_nvme_io": 69, 00:11:56.964 "current_admin_qpairs": 0, 00:11:56.964 "current_io_qpairs": 0, 00:11:56.964 "io_qpairs": 19, 00:11:56.964 "name": "nvmf_tgt_poll_group_002", 00:11:56.964 "pending_bdev_io": 0, 00:11:56.964 "transports": [ 00:11:56.964 { 00:11:56.964 "trtype": "TCP" 00:11:56.964 } 00:11:56.964 ] 00:11:56.964 }, 00:11:56.964 { 00:11:56.964 "admin_qpairs": 1, 00:11:56.964 "completed_nvme_io": 118, 00:11:56.964 "current_admin_qpairs": 0, 00:11:56.964 "current_io_qpairs": 0, 00:11:56.964 "io_qpairs": 18, 00:11:56.964 "name": "nvmf_tgt_poll_group_003", 00:11:56.964 "pending_bdev_io": 0, 00:11:56.964 "transports": [ 00:11:56.964 { 00:11:56.964 "trtype": "TCP" 00:11:56.964 } 00:11:56.964 ] 00:11:56.964 } 00:11:56.964 ], 00:11:56.964 "tick_rate": 2200000000 00:11:56.964 }' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.964 06:58:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.964 rmmod nvme_tcp 00:11:56.964 rmmod nvme_fabrics 00:11:56.964 rmmod nvme_keyring 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 83185 ']' 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 83185 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 83185 ']' 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 83185 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:56.964 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83185 00:11:57.224 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:11:57.224 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:57.224 killing process with pid 83185 00:11:57.224 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83185' 00:11:57.224 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 83185 00:11:57.224 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 83185 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:57.483 00:11:57.483 real 0m19.252s 00:11:57.483 user 1m12.785s 00:11:57.483 sys 0m2.122s 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:57.483 06:58:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.483 ************************************ 00:11:57.483 END TEST nvmf_rpc 00:11:57.483 ************************************ 00:11:57.483 06:58:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:57.483 06:58:05 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:57.483 06:58:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:57.483 06:58:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.483 06:58:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:57.483 ************************************ 00:11:57.483 START TEST nvmf_invalid 00:11:57.483 ************************************ 00:11:57.483 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:57.483 * Looking for test storage... 
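One note on the jsum helper that closed out nvmf_rpc above: it pulls a single numeric field out of the captured nvmf_get_stats JSON and sums it across poll groups. A minimal sketch, assuming the stats blob is fed in via the $stats variable seen in the trace:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }
    # the assertions above amount to:
    #   (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    #   (( $(jsum '.poll_groups[].io_qpairs') > 0 ))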
00:11:57.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:11:57.742 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.743 
06:58:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.743 06:58:05 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:57.743 Cannot find device "nvmf_tgt_br" 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:57.743 Cannot find device "nvmf_tgt_br2" 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:57.743 Cannot find device "nvmf_tgt_br" 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:57.743 Cannot find device "nvmf_tgt_br2" 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:57.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:57.743 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:57.743 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:58.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:11:58.003 00:11:58.003 --- 10.0.0.2 ping statistics --- 00:11:58.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.003 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:58.003 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:58.003 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:11:58.003 00:11:58.003 --- 10.0.0.3 ping statistics --- 00:11:58.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.003 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:58.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:11:58.003 00:11:58.003 --- 10.0.0.1 ping statistics --- 00:11:58.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.003 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=83696 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 83696 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 83696 ']' 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.003 06:58:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:58.003 [2024-07-13 06:58:06.026356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
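The interface errors and pings above are nvmf_veth_init tearing down any stale topology and rebuilding it; the "Cannot find device" and "Cannot open network namespace" messages are expected on a clean host. A condensed sketch from the traced nvmf/common.sh commands (names and addresses as shown; the "up"/teardown ordering is compressed here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target binary is then launched inside that namespace, which is what the SPDK/DPDK initialization output that follows corresponds to.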
00:11:58.003 [2024-07-13 06:58:06.026486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.261 [2024-07-13 06:58:06.166989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.261 [2024-07-13 06:58:06.285380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.261 [2024-07-13 06:58:06.285443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.261 [2024-07-13 06:58:06.285453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.261 [2024-07-13 06:58:06.285461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.261 [2024-07-13 06:58:06.285468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.261 [2024-07-13 06:58:06.285625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.261 [2024-07-13 06:58:06.285777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.261 [2024-07-13 06:58:06.286627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.261 [2024-07-13 06:58:06.286632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.196 06:58:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.196 06:58:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:11:59.196 06:58:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:59.196 06:58:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:59.196 06:58:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.196 06:58:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.196 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:59.196 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17625 00:11:59.196 [2024-07-13 06:58:07.248526] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:59.455 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/13 06:58:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17625 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:59.455 request: 00:11:59.455 { 00:11:59.455 "method": "nvmf_create_subsystem", 00:11:59.455 "params": { 00:11:59.455 "nqn": "nqn.2016-06.io.spdk:cnode17625", 00:11:59.455 "tgt_name": "foobar" 00:11:59.455 } 00:11:59.455 } 00:11:59.455 Got JSON-RPC error response 00:11:59.455 GoRPCClient: error on JSON-RPC call' 00:11:59.455 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/13 06:58:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17625 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:59.455 
request: 00:11:59.455 { 00:11:59.455 "method": "nvmf_create_subsystem", 00:11:59.455 "params": { 00:11:59.455 "nqn": "nqn.2016-06.io.spdk:cnode17625", 00:11:59.455 "tgt_name": "foobar" 00:11:59.455 } 00:11:59.455 } 00:11:59.455 Got JSON-RPC error response 00:11:59.455 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:59.455 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:59.455 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25079 00:11:59.455 [2024-07-13 06:58:07.472834] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25079: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:59.455 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/13 06:58:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25079 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:59.455 request: 00:11:59.455 { 00:11:59.455 "method": "nvmf_create_subsystem", 00:11:59.455 "params": { 00:11:59.455 "nqn": "nqn.2016-06.io.spdk:cnode25079", 00:11:59.455 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:59.455 } 00:11:59.455 } 00:11:59.455 Got JSON-RPC error response 00:11:59.455 GoRPCClient: error on JSON-RPC call' 00:11:59.455 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/13 06:58:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25079 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:59.455 request: 00:11:59.455 { 00:11:59.455 "method": "nvmf_create_subsystem", 00:11:59.455 "params": { 00:11:59.455 "nqn": "nqn.2016-06.io.spdk:cnode25079", 00:11:59.455 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:59.455 } 00:11:59.455 } 00:11:59.455 Got JSON-RPC error response 00:11:59.455 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:59.455 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:59.455 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17142 00:11:59.713 [2024-07-13 06:58:07.769182] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17142: invalid model number 'SPDK_Controller' 00:11:59.971 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/13 06:58:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17142], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:59.971 request: 00:11:59.971 { 00:11:59.971 "method": "nvmf_create_subsystem", 00:11:59.971 "params": { 00:11:59.971 "nqn": "nqn.2016-06.io.spdk:cnode17142", 00:11:59.971 "model_number": "SPDK_Controller\u001f" 00:11:59.971 } 00:11:59.971 } 00:11:59.971 Got JSON-RPC error response 00:11:59.971 GoRPCClient: error on JSON-RPC call' 00:11:59.971 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/13 06:58:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode17142], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:59.971 request: 00:11:59.971 { 00:11:59.971 "method": "nvmf_create_subsystem", 00:11:59.971 "params": { 00:11:59.971 "nqn": "nqn.2016-06.io.spdk:cnode17142", 00:11:59.971 "model_number": "SPDK_Controller\u001f" 00:11:59.971 } 00:11:59.971 } 00:11:59.971 Got JSON-RPC error response 00:11:59.971 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:59.971 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:59.971 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:59.972 06:58:07 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:59.972 06:58:07 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
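The three rejection cases traced earlier (unknown tgt_name, serial number with a non-printable byte, model number with a non-printable byte) all follow one pattern: issue the RPC with a bad parameter, capture the JSON-RPC error, and assert on the message. A sketch with the NQNs and flags as traced; the exact error-capture handling in invalid.sh is assumed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode

    out=$("$rpc" nvmf_create_subsystem -t foobar "${nqn}17625" 2>&1) || true
    [[ $out == *"Unable to find target"* ]]        # unknown tgt_name is rejected

    out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' "${nqn}25079" 2>&1) || true
    [[ $out == *"Invalid SN"* ]]                   # 0x1f byte in the serial is rejected

    out=$("$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' "${nqn}17142" 2>&1) || true
    [[ $out == *"Invalid MN"* ]]                   # 0x1f byte in the model number is rejected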
00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'u;y"U[Fees/Y2kO1.!l7r' 00:11:59.972 06:58:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'u;y"U[Fees/Y2kO1.!l7r' nqn.2016-06.io.spdk:cnode13280 00:12:00.231 [2024-07-13 06:58:08.185733] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13280: invalid serial number 'u;y"U[Fees/Y2kO1.!l7r' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/13 06:58:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13280 serial_number:u;y"U[Fees/Y2kO1.!l7r], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN u;y"U[Fees/Y2kO1.!l7r 00:12:00.231 request: 00:12:00.231 { 00:12:00.231 "method": "nvmf_create_subsystem", 00:12:00.231 "params": { 00:12:00.231 "nqn": "nqn.2016-06.io.spdk:cnode13280", 00:12:00.231 "serial_number": "u;y\"U[Fees/Y2kO1.!l7r" 00:12:00.231 } 00:12:00.231 } 00:12:00.231 Got JSON-RPC error response 00:12:00.231 GoRPCClient: error on JSON-RPC call' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/13 06:58:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13280 serial_number:u;y"U[Fees/Y2kO1.!l7r], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN u;y"U[Fees/Y2kO1.!l7r 00:12:00.231 request: 00:12:00.231 { 00:12:00.231 "method": "nvmf_create_subsystem", 00:12:00.231 "params": { 00:12:00.231 "nqn": "nqn.2016-06.io.spdk:cnode13280", 00:12:00.231 "serial_number": "u;y\"U[Fees/Y2kO1.!l7r" 00:12:00.231 } 00:12:00.231 } 00:12:00.231 Got JSON-RPC error response 00:12:00.231 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:00.231 06:58:08 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 
06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.231 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 
00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.490 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 
00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 
00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ j == \- ]] 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'jGNpqg[Lx3uqI`K_t]9`g#zF_v(-u]eQE]'\''-N*C' 00:12:00.491 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'jGNpqg[Lx3uqI`K_t]9`g#zF_v(-u]eQE]'\''-N*C' nqn.2016-06.io.spdk:cnode18697 00:12:00.749 [2024-07-13 06:58:08.682325] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18697: invalid model number 'jGNpqg[Lx3uqI`K_t]9`g#zF_v(-u]eQE]'-N*C' 00:12:00.749 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/13 06:58:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:jGNpqg[Lx3uqI`K_t]9`g#zF_v(-u]eQE]'\''-N*C nqn:nqn.2016-06.io.spdk:cnode18697], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN jGNpqg[Lx3uqI`K_t]9`g#zF_v(-u]eQE]'\''-N*C 00:12:00.749 request: 00:12:00.749 { 00:12:00.749 "method": "nvmf_create_subsystem", 00:12:00.749 "params": { 00:12:00.749 "nqn": "nqn.2016-06.io.spdk:cnode18697", 00:12:00.749 "model_number": "jGNpqg[L\u007fx3uqI`K_t]9`\u007fg#zF_v(-u]eQE]'\''-N*C" 00:12:00.749 } 00:12:00.749 } 00:12:00.749 Got JSON-RPC error response 00:12:00.749 GoRPCClient: error on JSON-RPC call' 00:12:00.749 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/13 06:58:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:jGNpqg[Lx3uqI`K_t]9`g#zF_v(-u]eQE]'-N*C nqn:nqn.2016-06.io.spdk:cnode18697], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN jGNpqg[Lx3uqI`K_t]9`g#zF_v(-u]eQE]'-N*C 00:12:00.749 request: 00:12:00.749 { 00:12:00.749 "method": "nvmf_create_subsystem", 00:12:00.749 "params": { 00:12:00.749 "nqn": "nqn.2016-06.io.spdk:cnode18697", 00:12:00.749 "model_number": "jGNpqg[L\u007fx3uqI`K_t]9`\u007fg#zF_v(-u]eQE]'-N*C" 00:12:00.749 } 00:12:00.749 } 00:12:00.749 Got JSON-RPC error response 00:12:00.749 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:00.749 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:01.007 [2024-07-13 06:58:08.946752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.007 06:58:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:01.265 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:01.265 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:01.265 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:01.265 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:01.265 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:01.523 [2024-07-13 06:58:09.491050] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:01.523 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/13 06:58:09 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:01.523 request: 00:12:01.523 { 00:12:01.523 "method": "nvmf_subsystem_remove_listener", 00:12:01.523 "params": { 00:12:01.523 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:01.523 "listen_address": { 00:12:01.523 "trtype": "tcp", 00:12:01.523 "traddr": "", 00:12:01.523 "trsvcid": "4421" 00:12:01.523 } 00:12:01.523 } 00:12:01.523 } 00:12:01.523 Got JSON-RPC error response 00:12:01.523 GoRPCClient: error on JSON-RPC call' 00:12:01.523 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/13 06:58:09 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:01.523 request: 00:12:01.523 { 00:12:01.523 "method": "nvmf_subsystem_remove_listener", 00:12:01.523 "params": { 00:12:01.523 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:01.523 "listen_address": { 00:12:01.523 "trtype": "tcp", 00:12:01.523 "traddr": "", 00:12:01.523 "trsvcid": "4421" 00:12:01.523 } 00:12:01.523 } 00:12:01.523 } 00:12:01.523 Got JSON-RPC error response 00:12:01.523 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:01.524 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13183 -i 0 00:12:01.782 [2024-07-13 06:58:09.723233] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13183: invalid cntlid range [0-65519] 00:12:01.782 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/13 06:58:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode13183], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:01.782 request: 00:12:01.782 { 00:12:01.782 "method": "nvmf_create_subsystem", 00:12:01.782 "params": { 00:12:01.782 "nqn": "nqn.2016-06.io.spdk:cnode13183", 00:12:01.782 "min_cntlid": 0 00:12:01.782 } 00:12:01.782 } 00:12:01.782 Got JSON-RPC error response 00:12:01.782 GoRPCClient: error on JSON-RPC call' 00:12:01.782 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/13 06:58:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 
nqn:nqn.2016-06.io.spdk:cnode13183], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:01.782 request: 00:12:01.782 { 00:12:01.782 "method": "nvmf_create_subsystem", 00:12:01.782 "params": { 00:12:01.782 "nqn": "nqn.2016-06.io.spdk:cnode13183", 00:12:01.782 "min_cntlid": 0 00:12:01.782 } 00:12:01.782 } 00:12:01.782 Got JSON-RPC error response 00:12:01.782 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:01.782 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8809 -i 65520 00:12:02.040 [2024-07-13 06:58:09.955489] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8809: invalid cntlid range [65520-65519] 00:12:02.040 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/13 06:58:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode8809], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:02.040 request: 00:12:02.040 { 00:12:02.040 "method": "nvmf_create_subsystem", 00:12:02.040 "params": { 00:12:02.040 "nqn": "nqn.2016-06.io.spdk:cnode8809", 00:12:02.040 "min_cntlid": 65520 00:12:02.040 } 00:12:02.040 } 00:12:02.040 Got JSON-RPC error response 00:12:02.040 GoRPCClient: error on JSON-RPC call' 00:12:02.040 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/13 06:58:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode8809], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:02.040 request: 00:12:02.040 { 00:12:02.040 "method": "nvmf_create_subsystem", 00:12:02.040 "params": { 00:12:02.040 "nqn": "nqn.2016-06.io.spdk:cnode8809", 00:12:02.040 "min_cntlid": 65520 00:12:02.040 } 00:12:02.040 } 00:12:02.040 Got JSON-RPC error response 00:12:02.040 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.040 06:58:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20943 -I 0 00:12:02.298 [2024-07-13 06:58:10.231819] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20943: invalid cntlid range [1-0] 00:12:02.299 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/13 06:58:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20943], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:02.299 request: 00:12:02.299 { 00:12:02.299 "method": "nvmf_create_subsystem", 00:12:02.299 "params": { 00:12:02.299 "nqn": "nqn.2016-06.io.spdk:cnode20943", 00:12:02.299 "max_cntlid": 0 00:12:02.299 } 00:12:02.299 } 00:12:02.299 Got JSON-RPC error response 00:12:02.299 GoRPCClient: error on JSON-RPC call' 00:12:02.299 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/13 06:58:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20943], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:02.299 request: 00:12:02.299 { 00:12:02.299 "method": 
"nvmf_create_subsystem", 00:12:02.299 "params": { 00:12:02.299 "nqn": "nqn.2016-06.io.spdk:cnode20943", 00:12:02.299 "max_cntlid": 0 00:12:02.299 } 00:12:02.299 } 00:12:02.299 Got JSON-RPC error response 00:12:02.299 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.299 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23832 -I 65520 00:12:02.557 [2024-07-13 06:58:10.464109] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23832: invalid cntlid range [1-65520] 00:12:02.557 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/13 06:58:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23832], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:02.557 request: 00:12:02.557 { 00:12:02.557 "method": "nvmf_create_subsystem", 00:12:02.557 "params": { 00:12:02.557 "nqn": "nqn.2016-06.io.spdk:cnode23832", 00:12:02.557 "max_cntlid": 65520 00:12:02.557 } 00:12:02.557 } 00:12:02.557 Got JSON-RPC error response 00:12:02.557 GoRPCClient: error on JSON-RPC call' 00:12:02.557 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/13 06:58:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23832], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:02.557 request: 00:12:02.557 { 00:12:02.557 "method": "nvmf_create_subsystem", 00:12:02.557 "params": { 00:12:02.557 "nqn": "nqn.2016-06.io.spdk:cnode23832", 00:12:02.557 "max_cntlid": 65520 00:12:02.557 } 00:12:02.557 } 00:12:02.557 Got JSON-RPC error response 00:12:02.557 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.557 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18260 -i 6 -I 5 00:12:02.815 [2024-07-13 06:58:10.692327] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18260: invalid cntlid range [6-5] 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/13 06:58:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode18260], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:02.815 request: 00:12:02.815 { 00:12:02.815 "method": "nvmf_create_subsystem", 00:12:02.815 "params": { 00:12:02.815 "nqn": "nqn.2016-06.io.spdk:cnode18260", 00:12:02.815 "min_cntlid": 6, 00:12:02.815 "max_cntlid": 5 00:12:02.815 } 00:12:02.815 } 00:12:02.815 Got JSON-RPC error response 00:12:02.815 GoRPCClient: error on JSON-RPC call' 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/13 06:58:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode18260], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:02.815 request: 00:12:02.815 { 00:12:02.815 "method": "nvmf_create_subsystem", 00:12:02.815 "params": { 00:12:02.815 "nqn": "nqn.2016-06.io.spdk:cnode18260", 00:12:02.815 "min_cntlid": 6, 00:12:02.815 "max_cntlid": 5 
00:12:02.815 } 00:12:02.815 } 00:12:02.815 Got JSON-RPC error response 00:12:02.815 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:02.815 { 00:12:02.815 "name": "foobar", 00:12:02.815 "method": "nvmf_delete_target", 00:12:02.815 "req_id": 1 00:12:02.815 } 00:12:02.815 Got JSON-RPC error response 00:12:02.815 response: 00:12:02.815 { 00:12:02.815 "code": -32602, 00:12:02.815 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:02.815 }' 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:02.815 { 00:12:02.815 "name": "foobar", 00:12:02.815 "method": "nvmf_delete_target", 00:12:02.815 "req_id": 1 00:12:02.815 } 00:12:02.815 Got JSON-RPC error response 00:12:02.815 response: 00:12:02.815 { 00:12:02.815 "code": -32602, 00:12:02.815 "message": "The specified target doesn't exist, cannot delete it." 00:12:02.815 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.815 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.815 rmmod nvme_tcp 00:12:02.815 rmmod nvme_fabrics 00:12:03.073 rmmod nvme_keyring 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 83696 ']' 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 83696 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 83696 ']' 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 83696 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83696 00:12:03.073 killing process with pid 83696 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83696' 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 83696 00:12:03.073 06:58:10 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@972 -- # wait 83696 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:03.332 ************************************ 00:12:03.332 END TEST nvmf_invalid 00:12:03.332 ************************************ 00:12:03.332 00:12:03.332 real 0m5.844s 00:12:03.332 user 0m22.871s 00:12:03.332 sys 0m1.351s 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.332 06:58:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:03.332 06:58:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:03.332 06:58:11 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:03.332 06:58:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:03.332 06:58:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.332 06:58:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:03.332 ************************************ 00:12:03.332 START TEST nvmf_abort 00:12:03.332 ************************************ 00:12:03.332 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:03.591 * Looking for test storage... 
00:12:03.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:03.592 Cannot find device "nvmf_tgt_br" 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:03.592 Cannot find device "nvmf_tgt_br2" 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:03.592 Cannot find device "nvmf_tgt_br" 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:03.592 Cannot find device "nvmf_tgt_br2" 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:03.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:03.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:03.592 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:03.851 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:03.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:12:03.852 00:12:03.852 --- 10.0.0.2 ping statistics --- 00:12:03.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.852 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:03.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:03.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:12:03.852 00:12:03.852 --- 10.0.0.3 ping statistics --- 00:12:03.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.852 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:03.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:03.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:12:03.852 00:12:03.852 --- 10.0.0.1 ping statistics --- 00:12:03.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.852 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:03.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=84214 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 84214 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 84214 ']' 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.852 06:58:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:03.852 [2024-07-13 06:58:11.888580] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:12:03.852 [2024-07-13 06:58:11.889133] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.110 [2024-07-13 06:58:12.032265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.110 [2024-07-13 06:58:12.164824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.110 [2024-07-13 06:58:12.165180] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:04.110 [2024-07-13 06:58:12.165522] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.110 [2024-07-13 06:58:12.165556] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.110 [2024-07-13 06:58:12.165593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.110 [2024-07-13 06:58:12.165785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.111 [2024-07-13 06:58:12.166366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.111 [2024-07-13 06:58:12.166393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 [2024-07-13 06:58:12.965098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.044 06:58:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 Malloc0 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 Delay0 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.044 06:58:13 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 [2024-07-13 06:58:13.047575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.044 06:58:13 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:05.301 [2024-07-13 06:58:13.223813] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:07.199 Initializing NVMe Controllers 00:12:07.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:07.199 controller IO queue size 128 less than required 00:12:07.199 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:07.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:07.199 Initialization complete. Launching workers. 
00:12:07.199 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36530 00:12:07.199 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36591, failed to submit 62 00:12:07.199 success 36534, unsuccess 57, failed 0 00:12:07.199 06:58:15 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:07.199 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.199 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:07.199 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.199 06:58:15 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:07.199 06:58:15 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:07.199 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.199 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.459 rmmod nvme_tcp 00:12:07.459 rmmod nvme_fabrics 00:12:07.459 rmmod nvme_keyring 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 84214 ']' 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 84214 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 84214 ']' 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 84214 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84214 00:12:07.459 killing process with pid 84214 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84214' 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 84214 00:12:07.459 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 84214 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:07.718 ************************************ 00:12:07.718 END TEST nvmf_abort 00:12:07.718 ************************************ 00:12:07.718 00:12:07.718 real 0m4.383s 00:12:07.718 user 0m12.522s 00:12:07.718 sys 0m1.024s 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.718 06:58:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:07.978 06:58:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:07.978 06:58:15 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:07.978 06:58:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:07.978 06:58:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.978 06:58:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:07.978 ************************************ 00:12:07.978 START TEST nvmf_ns_hotplug_stress 00:12:07.978 ************************************ 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:07.978 * Looking for test storage... 00:12:07.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:07.978 06:58:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.978 06:58:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.978 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:07.979 Cannot find device "nvmf_tgt_br" 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.979 Cannot find device "nvmf_tgt_br2" 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:07.979 Cannot find device "nvmf_tgt_br" 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:07.979 Cannot find device "nvmf_tgt_br2" 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:12:07.979 06:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:07.979 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:08.239 06:58:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:08.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:12:08.239 00:12:08.239 --- 10.0.0.2 ping statistics --- 00:12:08.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.239 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:08.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:08.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:08.239 00:12:08.239 --- 10.0.0.3 ping statistics --- 00:12:08.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.239 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:08.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:08.239 00:12:08.239 --- 10.0.0.1 ping statistics --- 00:12:08.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.239 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=84472 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 84472 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 84472 ']' 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.239 06:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.504 [2024-07-13 06:58:16.360522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:12:08.504 [2024-07-13 06:58:16.360640] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.504 [2024-07-13 06:58:16.499581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.794 [2024-07-13 06:58:16.634011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:08.794 [2024-07-13 06:58:16.634355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.794 [2024-07-13 06:58:16.634425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.794 [2024-07-13 06:58:16.634500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.794 [2024-07-13 06:58:16.634559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.794 [2024-07-13 06:58:16.635321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.794 [2024-07-13 06:58:16.635630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.794 [2024-07-13 06:58:16.635655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.367 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.367 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:12:09.367 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:09.367 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:09.367 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.367 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.367 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:09.367 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:09.627 [2024-07-13 06:58:17.686491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.886 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:10.144 06:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.144 [2024-07-13 06:58:18.183396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.144 06:58:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:10.402 06:58:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:10.659 Malloc0 00:12:10.659 06:58:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:10.916 Delay0 00:12:10.916 06:58:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.174 06:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:11.432 NULL1 00:12:11.432 
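For readability: stripped of timestamps and xtrace prefixes, the nvmf_ns_hotplug_stress target set-up traced immediately above and just below reduces to roughly the following rpc.py sequence. This is a condensed sketch, not the literal script; it assumes an nvmf_tgt already running and reachable on the default /var/tmp/spdk.sock, and reuses the repo path, NQN and listen address recorded in this run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Transport, subsystem and TCP listeners (flags exactly as recorded in the trace)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Backing bdevs: a delay bdev stacked on a 32 MB malloc bdev, plus a null bdev
# that the test keeps resizing later on; each is exported as a namespace of cnode1
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns "$nqn" Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns "$nqn" NULL1

With that in place the script launches spdk_nvme_perf against 10.0.0.2:4420 and starts the hot-plug loop traced below.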
06:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:11.690 06:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:11.690 06:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=84609 00:12:11.690 06:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:11.690 06:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.064 Read completed with error (sct=0, sc=11) 00:12:13.064 06:58:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.064 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.064 06:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:13.064 06:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:13.322 true 00:12:13.322 06:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:13.322 06:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.258 06:58:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.517 06:58:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:14.517 06:58:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:14.517 true 00:12:14.777 06:58:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:14.777 06:58:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.777 06:58:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.035 06:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:15.035 06:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:15.292 true 00:12:15.292 06:58:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:15.292 06:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.227 06:58:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.486 06:58:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:16.486 06:58:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:16.744 true 00:12:16.744 06:58:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:16.744 06:58:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.003 06:58:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.261 06:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:17.261 06:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:17.261 true 00:12:17.518 06:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:17.518 06:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.083 06:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.340 06:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:18.340 06:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:18.597 true 00:12:18.597 06:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:18.597 06:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.854 06:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.113 06:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:19.113 06:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:19.371 true 00:12:19.371 06:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:19.371 06:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.305 06:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.563 06:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:20.563 06:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:20.563 true 00:12:20.563 06:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:20.563 06:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.821 06:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.078 06:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:21.078 06:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:21.336 true 00:12:21.336 06:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:21.336 06:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.271 06:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.529 06:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:22.529 06:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:22.529 true 00:12:22.786 06:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:22.786 06:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.786 06:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.044 06:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:23.044 06:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:23.303 true 00:12:23.303 06:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:23.303 06:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.238 06:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.496 06:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:24.496 06:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:24.754 
true 00:12:24.754 06:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:24.754 06:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.011 06:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.268 06:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:25.268 06:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:25.526 true 00:12:25.526 06:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:25.526 06:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.152 06:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.410 06:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:26.410 06:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:26.667 true 00:12:26.667 06:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:26.667 06:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.925 06:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.184 06:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:27.184 06:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:27.441 true 00:12:27.441 06:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:27.441 06:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.375 06:58:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.375 06:58:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:28.375 06:58:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:28.633 true 00:12:28.633 06:58:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:28.633 06:58:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.891 06:58:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.149 06:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:29.149 06:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:29.407 true 00:12:29.407 06:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:29.407 06:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.342 06:58:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.600 06:58:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:30.600 06:58:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:30.600 true 00:12:30.859 06:58:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:30.859 06:58:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.859 06:58:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.118 06:58:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:31.118 06:58:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:31.377 true 00:12:31.377 06:58:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:31.377 06:58:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.309 06:58:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.566 06:58:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:32.567 06:58:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:32.825 true 00:12:32.825 06:58:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:32.825 06:58:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.083 06:58:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.083 06:58:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:33.083 06:58:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:33.342 true 00:12:33.342 06:58:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:33.342 06:58:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.279 06:58:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.537 06:58:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:34.538 06:58:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:34.796 true 00:12:34.796 06:58:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:34.796 06:58:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.053 06:58:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.310 06:58:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:35.310 06:58:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:35.567 true 00:12:35.567 06:58:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:35.567 06:58:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.500 06:58:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.500 06:58:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:36.500 06:58:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:36.758 true 00:12:36.758 06:58:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:36.758 06:58:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.016 06:58:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.275 06:58:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:37.275 06:58:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:37.534 true 00:12:37.534 06:58:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:37.534 06:58:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.468 06:58:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.468 06:58:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:38.468 06:58:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:38.727 true 00:12:38.727 06:58:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:38.727 06:58:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.985 06:58:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.243 06:58:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:39.243 06:58:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:39.501 true 00:12:39.502 06:58:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:39.502 06:58:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.436 06:58:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.694 06:58:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:40.694 06:58:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:40.694 true 00:12:40.952 06:58:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:40.953 06:58:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.953 06:58:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.211 06:58:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:41.211 06:58:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:41.469 true 00:12:41.469 06:58:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:41.469 06:58:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.403 Initializing NVMe Controllers 00:12:42.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.403 Controller IO queue size 128, less than required. 00:12:42.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:42.403 Controller IO queue size 128, less than required. 
00:12:42.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:42.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:42.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:42.403 Initialization complete. Launching workers. 00:12:42.403 ======================================================== 00:12:42.403 Latency(us) 00:12:42.403 Device Information : IOPS MiB/s Average min max 00:12:42.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 375.90 0.18 195023.13 3309.37 1032945.58 00:12:42.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12516.21 6.11 10226.80 3641.96 571865.90 00:12:42.403 ======================================================== 00:12:42.403 Total : 12892.10 6.29 15614.95 3309.37 1032945.58 00:12:42.403 00:12:42.403 06:58:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.661 06:58:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:12:42.661 06:58:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:12:42.919 true 00:12:42.919 06:58:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84609 00:12:42.919 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (84609) - No such process 00:12:42.919 06:58:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 84609 00:12:42.919 06:58:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.177 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:43.438 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:43.438 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:43.438 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:43.438 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:43.438 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:43.710 null0 00:12:43.710 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:43.710 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:43.710 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:44.002 null1 00:12:44.002 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.002 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.002 06:58:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:44.002 null2 
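For orientation: the repetitive stretch above (null_size=1001 through 1030) is the hot-plug loop that target/ns_hotplug_stress.sh runs for as long as the spdk_nvme_perf job stays alive. Reconstructed only approximately from the traced line numbers (the literal script body is not shown in this log), one pass looks like:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000
# PERF_PID is the spdk_nvme_perf process started earlier (84609 in this run).
# Approximate reconstruction of lines 44-50 traced above; details may differ.
while kill -0 "$PERF_PID"; do
    $rpc nvmf_subsystem_remove_ns "$nqn" 1      # yank namespace 1 while perf is doing I/O
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0    # plug the delay bdev straight back in
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"    # grow NULL1: 1001, 1002, ... 1030
done
wait "$PERF_PID"                                # line 53 of the script

The interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are spdk_nvme_perf reporting reads that fail while namespace 1 is detached, which is precisely the condition this stress test exercises.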
00:12:44.002 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.002 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.002 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:44.260 null3 00:12:44.260 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.260 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.260 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:44.519 null4 00:12:44.519 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.519 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.519 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:44.777 null5 00:12:44.777 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:44.777 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:44.777 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:45.034 null6 00:12:45.034 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:45.034 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:45.034 06:58:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:45.291 null7 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:45.291 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
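
The interleaved @62-@66 and @14-@18 traces above show the script forking one add_remove worker per bdev, recording each background PID, and then waiting on all of them. A reconstruction of that logic from the xtrace, offered as a sketch rather than the literal script source; the subsystem NQN, iteration count, and nsid-to-bdev mapping are the values visible in this run:

# Reconstructed from ns_hotplug_stress.sh@14-@18 and @62-@66 as traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsystem=nqn.2016-06.io.spdk:cnode1
nthreads=8

add_remove() {
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; ++i )); do
        # hot-add the namespace, then immediately hot-remove it again
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsystem" "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns "$subsystem" "$nsid"
    done
}

pids=()
for (( i = 0; i < nthreads; ++i )); do
    add_remove "$((i + 1))" "null$i" &   # nsid 1..8 maps onto null0..null7
    pids+=($!)
done
wait "${pids[@]}"
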
00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:45.292 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 85661 85662 85664 85666 85668 85670 85672 85674 00:12:45.549 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:45.550 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.550 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:45.550 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.550 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:45.550 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:45.550 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:45.808 06:58:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:45.808 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:46.066 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.066 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.066 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:46.066 06:58:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:46.066 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:46.066 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.066 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:46.066 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:46.324 
06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.324 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:46.581 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:46.582 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:46.582 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:46.582 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.839 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:46.840 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:46.840 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:46.840 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:46.840 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:47.097 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.097 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.097 06:58:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:47.097 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:47.097 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:47.097 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:47.097 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.097 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.097 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:47.097 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:47.097 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.355 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:47.612 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:47.870 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:48.128 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.128 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.128 06:58:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.128 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.129 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:48.129 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:48.129 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:48.129 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
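
By this point the eight workers are several iterations into their loops, which is why add, remove, and counter traces for different namespace IDs interleave freely in the log. Stripped of the concurrency, each traced iteration is just the following RPC pair (shown here for nsid 3 backed by null2, using the same rpc.py path and NQN as this run):

# One add/remove cycle of a single worker, as seen repeatedly in the trace.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
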
00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.387 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.645 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:48.903 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.903 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:48.903 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:48.903 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.903 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.903 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.903 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:48.903 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:48.904 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:48.904 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:48.904 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:49.162 06:58:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.162 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.421 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:49.678 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:49.935 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.935 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.935 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:49.935 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.935 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.936 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:49.936 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:49.936 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.936 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.936 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:49.936 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.936 06:58:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:49.936 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:49.936 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:49.936 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.193 06:58:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:50.193 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.451 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.708 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.966 rmmod nvme_tcp 00:12:50.966 rmmod nvme_fabrics 00:12:50.966 rmmod nvme_keyring 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 84472 ']' 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 84472 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 84472 ']' 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 84472 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84472 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:50.966 killing process with pid 84472 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84472' 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 84472 00:12:50.966 06:58:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 84472 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:51.223 06:58:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:51.223 00:12:51.223 real 0m43.436s 00:12:51.223 user 3m26.182s 00:12:51.223 sys 0m12.685s 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:51.223 06:58:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.223 ************************************ 00:12:51.223 END TEST nvmf_ns_hotplug_stress 00:12:51.223 ************************************ 00:12:51.223 06:58:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:51.223 06:58:59 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:51.223 06:58:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:51.223 06:58:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.223 06:58:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:51.223 ************************************ 00:12:51.223 START TEST nvmf_connect_stress 00:12:51.223 ************************************ 00:12:51.223 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:51.481 * Looking for test storage... 00:12:51.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.481 06:58:59 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.481 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 
00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:51.482 Cannot find device "nvmf_tgt_br" 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:51.482 Cannot find device "nvmf_tgt_br2" 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:51.482 Cannot find device "nvmf_tgt_br" 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:51.482 Cannot find device "nvmf_tgt_br2" 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:51.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:51.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:51.482 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:51.740 06:58:59 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:51.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:12:51.740 00:12:51.740 --- 10.0.0.2 ping statistics --- 00:12:51.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.740 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:51.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:51.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:12:51.740 00:12:51.740 --- 10.0.0.3 ping statistics --- 00:12:51.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.740 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:51.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:51.740 00:12:51.740 --- 10.0.0.1 ping statistics --- 00:12:51.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.740 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=86972 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 86972 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 86972 ']' 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.740 06:58:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.998 [2024-07-13 06:58:59.827329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:12:51.998 [2024-07-13 06:58:59.827442] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.998 [2024-07-13 06:58:59.968268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.256 [2024-07-13 06:59:00.073771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:52.256 [2024-07-13 06:59:00.073829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.256 [2024-07-13 06:59:00.073843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.256 [2024-07-13 06:59:00.073853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.256 [2024-07-13 06:59:00.073868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.256 [2024-07-13 06:59:00.074041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.256 [2024-07-13 06:59:00.074809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.256 [2024-07-13 06:59:00.074826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.823 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.823 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:12:52.823 06:59:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.823 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.823 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.823 06:59:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.823 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.823 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.823 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.082 [2024-07-13 06:59:00.902130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.082 [2024-07-13 06:59:00.922240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.082 NULL1 00:12:53.082 06:59:00 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=87024 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.082 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.341 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.341 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:53.341 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.341 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.341 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.599 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.599 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:53.599 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.599 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.599 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.166 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.166 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:54.166 06:59:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.166 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.166 06:59:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.426 06:59:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.426 06:59:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:54.426 06:59:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.426 06:59:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:12:54.426 06:59:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.697 06:59:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.697 06:59:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:54.697 06:59:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.697 06:59:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.697 06:59:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.961 06:59:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.961 06:59:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:54.961 06:59:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.961 06:59:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.961 06:59:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.219 06:59:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.219 06:59:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:55.219 06:59:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.219 06:59:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.219 06:59:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.784 06:59:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.784 06:59:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:55.784 06:59:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.784 06:59:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.784 06:59:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.042 06:59:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.042 06:59:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:56.042 06:59:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.042 06:59:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.042 06:59:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.300 06:59:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.300 06:59:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:56.300 06:59:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.300 06:59:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.300 06:59:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.557 06:59:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.557 06:59:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:56.557 06:59:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.557 06:59:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.557 06:59:04 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.123 06:59:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.123 06:59:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:57.123 06:59:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.123 06:59:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.123 06:59:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.381 06:59:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.381 06:59:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:57.381 06:59:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.381 06:59:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.381 06:59:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.640 06:59:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.640 06:59:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:57.640 06:59:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.640 06:59:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.640 06:59:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.899 06:59:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.899 06:59:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:57.899 06:59:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.899 06:59:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.899 06:59:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.158 06:59:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.158 06:59:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:58.158 06:59:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.158 06:59:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.158 06:59:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.727 06:59:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.727 06:59:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:58.727 06:59:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.727 06:59:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.727 06:59:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:58.986 06:59:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.986 06:59:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:58.986 06:59:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:58.986 06:59:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.986 06:59:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.245 06:59:07 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.245 06:59:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:59.245 06:59:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.245 06:59:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.245 06:59:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.504 06:59:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.504 06:59:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:59.504 06:59:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.504 06:59:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.504 06:59:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.763 06:59:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.763 06:59:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:12:59.763 06:59:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:59.763 06:59:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.763 06:59:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.330 06:59:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.330 06:59:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:00.330 06:59:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.330 06:59:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.330 06:59:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.589 06:59:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.589 06:59:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:00.589 06:59:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.589 06:59:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.589 06:59:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.852 06:59:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.852 06:59:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:00.852 06:59:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.852 06:59:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.852 06:59:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.131 06:59:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.131 06:59:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:01.131 06:59:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.131 06:59:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.131 06:59:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.403 06:59:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:13:01.403 06:59:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:01.403 06:59:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.403 06:59:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.403 06:59:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.970 06:59:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.970 06:59:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:01.970 06:59:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.970 06:59:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.970 06:59:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.229 06:59:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.229 06:59:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:02.229 06:59:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.229 06:59:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.229 06:59:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.488 06:59:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.488 06:59:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:02.488 06:59:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.488 06:59:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.488 06:59:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.747 06:59:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.747 06:59:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:02.747 06:59:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.747 06:59:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.747 06:59:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.007 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.007 06:59:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:03.007 06:59:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.007 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.007 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.265 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87024 00:13:03.524 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (87024) - No such process 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 87024 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 
00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:03.524 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.525 rmmod nvme_tcp 00:13:03.525 rmmod nvme_fabrics 00:13:03.525 rmmod nvme_keyring 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 86972 ']' 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 86972 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 86972 ']' 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 86972 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86972 00:13:03.525 killing process with pid 86972 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86972' 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 86972 00:13:03.525 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 86972 00:13:03.783 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:03.784 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:03.784 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:03.784 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.784 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:03.784 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.784 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.784 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.784 06:59:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:03.784 00:13:03.784 real 0m12.461s 00:13:03.784 user 0m41.662s 00:13:03.784 sys 0m3.151s 00:13:03.784 06:59:11 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.784 ************************************ 00:13:03.784 END TEST nvmf_connect_stress 00:13:03.784 ************************************ 00:13:03.784 06:59:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.784 06:59:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:03.784 06:59:11 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:03.784 06:59:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:03.784 06:59:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.784 06:59:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:03.784 ************************************ 00:13:03.784 START TEST nvmf_fused_ordering 00:13:03.784 ************************************ 00:13:03.784 06:59:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:04.043 * Looking for test storage... 00:13:04.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.043 06:59:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 
00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:04.044 Cannot find device "nvmf_tgt_br" 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:04.044 Cannot find device "nvmf_tgt_br2" 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set 
nvmf_init_br down 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:04.044 Cannot find device "nvmf_tgt_br" 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:04.044 Cannot find device "nvmf_tgt_br2" 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:13:04.044 06:59:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:04.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:04.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:04.044 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:04.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:13:04.303 00:13:04.303 --- 10.0.0.2 ping statistics --- 00:13:04.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.303 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:04.303 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:04.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:13:04.303 00:13:04.303 --- 10.0.0.3 ping statistics --- 00:13:04.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.303 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:04.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:04.303 00:13:04.303 --- 10.0.0.1 ping statistics --- 00:13:04.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.303 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:04.303 06:59:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:04.304 06:59:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.304 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=87346 00:13:04.304 06:59:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:04.304 06:59:12 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 87346 00:13:04.304 06:59:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 87346 ']' 00:13:04.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.304 06:59:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.304 06:59:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.304 06:59:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.304 06:59:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.304 06:59:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.304 [2024-07-13 06:59:12.338886] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:04.304 [2024-07-13 06:59:12.338961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.562 [2024-07-13 06:59:12.479851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.562 [2024-07-13 06:59:12.566751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.562 [2024-07-13 06:59:12.566789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.562 [2024-07-13 06:59:12.566800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.562 [2024-07-13 06:59:12.566809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.562 [2024-07-13 06:59:12.566816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
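waitforlisten simply polls until the freshly started nvmf_tgt (pid 87346 in this run) answers on its RPC UNIX socket, /var/tmp/spdk.sock; the socket lives on the shared filesystem, so it is reachable from outside the network namespace. A sketch of how the running target could be poked by hand at this point (the rpc.py path matches this workspace; the spdk_trace invocation is the one the startup notice itself suggests):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk.sock spdk_get_version            # confirms the RPC server is up and responding
# snapshot the tracepoints enabled by -e 0xFFFF, or keep the shared-memory file for offline analysis
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0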
00:13:04.562 [2024-07-13 06:59:12.566840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:05.496 [2024-07-13 06:59:13.401174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:05.496 [2024-07-13 06:59:13.417265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:05.496 NULL1 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.496 06:59:13 
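fused_ordering.sh then drives the target over RPC: it creates the TCP transport, a subsystem that allows any host, a TCP listener on 10.0.0.2:4420, and a 1000 MB null bdev that becomes namespace 1. rpc_cmd is the harness wrapper around rpc.py, so the equivalent manual sequence is roughly (a sketch; option letters are copied from the trace above rather than glossed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                            # TCP transport with the harness's NVMF_TRANSPORT_OPTS
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # allow any host, fixed serial, up to 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # listener the client connects to
$rpc bdev_null_create NULL1 1000 512                                                    # 1000 MB null bdev with 512-byte blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                             # exposed as namespace 1 ("size: 1GB" below)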
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.496 06:59:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:05.496 [2024-07-13 06:59:13.469952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:05.496 [2024-07-13 06:59:13.469998] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87396 ] 00:13:06.069 Attached to nqn.2016-06.io.spdk:cnode1 00:13:06.069 Namespace ID: 1 size: 1GB 00:13:06.069 fused_ordering(0) 00:13:06.069 fused_ordering(1) 00:13:06.069 fused_ordering(2) 00:13:06.069 fused_ordering(3) 00:13:06.069 fused_ordering(4) 00:13:06.069 fused_ordering(5) 00:13:06.069 fused_ordering(6) 00:13:06.069 fused_ordering(7) 00:13:06.069 fused_ordering(8) 00:13:06.069 fused_ordering(9) 00:13:06.069 fused_ordering(10) 00:13:06.069 fused_ordering(11) 00:13:06.069 fused_ordering(12) 00:13:06.069 fused_ordering(13) 00:13:06.069 fused_ordering(14) 00:13:06.069 fused_ordering(15) 00:13:06.069 fused_ordering(16) 00:13:06.069 fused_ordering(17) 00:13:06.069 fused_ordering(18) 00:13:06.069 fused_ordering(19) 00:13:06.069 fused_ordering(20) 00:13:06.069 fused_ordering(21) 00:13:06.069 fused_ordering(22) 00:13:06.069 fused_ordering(23) 00:13:06.069 fused_ordering(24) 00:13:06.070 fused_ordering(25) 00:13:06.070 fused_ordering(26) 00:13:06.070 fused_ordering(27) 00:13:06.070 fused_ordering(28) 00:13:06.070 fused_ordering(29) 00:13:06.070 fused_ordering(30) 00:13:06.070 fused_ordering(31) 00:13:06.070 fused_ordering(32) 00:13:06.070 fused_ordering(33) 00:13:06.070 fused_ordering(34) 00:13:06.070 fused_ordering(35) 00:13:06.070 fused_ordering(36) 00:13:06.070 fused_ordering(37) 00:13:06.070 fused_ordering(38) 00:13:06.070 fused_ordering(39) 00:13:06.070 fused_ordering(40) 00:13:06.070 fused_ordering(41) 00:13:06.070 fused_ordering(42) 00:13:06.070 fused_ordering(43) 00:13:06.070 fused_ordering(44) 00:13:06.070 fused_ordering(45) 00:13:06.070 fused_ordering(46) 00:13:06.070 fused_ordering(47) 00:13:06.070 fused_ordering(48) 00:13:06.070 fused_ordering(49) 00:13:06.070 fused_ordering(50) 00:13:06.070 fused_ordering(51) 00:13:06.070 fused_ordering(52) 00:13:06.070 fused_ordering(53) 00:13:06.070 fused_ordering(54) 00:13:06.070 fused_ordering(55) 00:13:06.070 fused_ordering(56) 00:13:06.070 fused_ordering(57) 00:13:06.070 fused_ordering(58) 00:13:06.070 fused_ordering(59) 00:13:06.070 fused_ordering(60) 00:13:06.070 fused_ordering(61) 00:13:06.070 fused_ordering(62) 00:13:06.070 fused_ordering(63) 00:13:06.070 fused_ordering(64) 00:13:06.070 fused_ordering(65) 00:13:06.070 fused_ordering(66) 00:13:06.070 fused_ordering(67) 00:13:06.070 fused_ordering(68) 00:13:06.070 fused_ordering(69) 00:13:06.070 fused_ordering(70) 00:13:06.070 fused_ordering(71) 00:13:06.070 fused_ordering(72) 00:13:06.070 fused_ordering(73) 00:13:06.070 fused_ordering(74) 00:13:06.070 fused_ordering(75) 00:13:06.070 fused_ordering(76) 00:13:06.070 fused_ordering(77) 00:13:06.070 fused_ordering(78) 00:13:06.070 fused_ordering(79) 00:13:06.070 fused_ordering(80) 00:13:06.070 
fused_ordering(81) 00:13:06.070 fused_ordering(82) 00:13:06.070 fused_ordering(83) 00:13:06.070 fused_ordering(84) 00:13:06.070 fused_ordering(85) 00:13:06.070 fused_ordering(86) 00:13:06.070 fused_ordering(87) 00:13:06.070 fused_ordering(88) 00:13:06.070 fused_ordering(89) 00:13:06.070 fused_ordering(90) 00:13:06.070 fused_ordering(91) 00:13:06.070 fused_ordering(92) 00:13:06.070 fused_ordering(93) 00:13:06.070 fused_ordering(94) 00:13:06.070 fused_ordering(95) 00:13:06.070 fused_ordering(96) 00:13:06.070 fused_ordering(97) 00:13:06.070 fused_ordering(98) 00:13:06.070 fused_ordering(99) 00:13:06.070 fused_ordering(100) 00:13:06.070 fused_ordering(101) 00:13:06.070 fused_ordering(102) 00:13:06.070 fused_ordering(103) 00:13:06.070 fused_ordering(104) 00:13:06.070 fused_ordering(105) 00:13:06.070 fused_ordering(106) 00:13:06.070 fused_ordering(107) 00:13:06.070 fused_ordering(108) 00:13:06.070 fused_ordering(109) 00:13:06.070 fused_ordering(110) 00:13:06.070 fused_ordering(111) 00:13:06.070 fused_ordering(112) 00:13:06.070 fused_ordering(113) 00:13:06.070 fused_ordering(114) 00:13:06.070 fused_ordering(115) 00:13:06.070 fused_ordering(116) 00:13:06.070 fused_ordering(117) 00:13:06.070 fused_ordering(118) 00:13:06.070 fused_ordering(119) 00:13:06.070 fused_ordering(120) 00:13:06.070 fused_ordering(121) 00:13:06.070 fused_ordering(122) 00:13:06.070 fused_ordering(123) 00:13:06.070 fused_ordering(124) 00:13:06.070 fused_ordering(125) 00:13:06.070 fused_ordering(126) 00:13:06.070 fused_ordering(127) 00:13:06.070 fused_ordering(128) 00:13:06.070 fused_ordering(129) 00:13:06.070 fused_ordering(130) 00:13:06.070 fused_ordering(131) 00:13:06.070 fused_ordering(132) 00:13:06.070 fused_ordering(133) 00:13:06.070 fused_ordering(134) 00:13:06.070 fused_ordering(135) 00:13:06.070 fused_ordering(136) 00:13:06.070 fused_ordering(137) 00:13:06.070 fused_ordering(138) 00:13:06.070 fused_ordering(139) 00:13:06.070 fused_ordering(140) 00:13:06.070 fused_ordering(141) 00:13:06.070 fused_ordering(142) 00:13:06.070 fused_ordering(143) 00:13:06.070 fused_ordering(144) 00:13:06.070 fused_ordering(145) 00:13:06.070 fused_ordering(146) 00:13:06.070 fused_ordering(147) 00:13:06.070 fused_ordering(148) 00:13:06.070 fused_ordering(149) 00:13:06.070 fused_ordering(150) 00:13:06.070 fused_ordering(151) 00:13:06.070 fused_ordering(152) 00:13:06.070 fused_ordering(153) 00:13:06.070 fused_ordering(154) 00:13:06.070 fused_ordering(155) 00:13:06.070 fused_ordering(156) 00:13:06.070 fused_ordering(157) 00:13:06.070 fused_ordering(158) 00:13:06.070 fused_ordering(159) 00:13:06.070 fused_ordering(160) 00:13:06.070 fused_ordering(161) 00:13:06.070 fused_ordering(162) 00:13:06.070 fused_ordering(163) 00:13:06.070 fused_ordering(164) 00:13:06.070 fused_ordering(165) 00:13:06.070 fused_ordering(166) 00:13:06.070 fused_ordering(167) 00:13:06.070 fused_ordering(168) 00:13:06.070 fused_ordering(169) 00:13:06.070 fused_ordering(170) 00:13:06.070 fused_ordering(171) 00:13:06.070 fused_ordering(172) 00:13:06.070 fused_ordering(173) 00:13:06.070 fused_ordering(174) 00:13:06.070 fused_ordering(175) 00:13:06.070 fused_ordering(176) 00:13:06.070 fused_ordering(177) 00:13:06.070 fused_ordering(178) 00:13:06.070 fused_ordering(179) 00:13:06.070 fused_ordering(180) 00:13:06.070 fused_ordering(181) 00:13:06.070 fused_ordering(182) 00:13:06.070 fused_ordering(183) 00:13:06.070 fused_ordering(184) 00:13:06.070 fused_ordering(185) 00:13:06.070 fused_ordering(186) 00:13:06.070 fused_ordering(187) 00:13:06.070 fused_ordering(188) 00:13:06.070 
fused_ordering(189) 00:13:06.070 fused_ordering(190) 00:13:06.070 fused_ordering(191) 00:13:06.070 fused_ordering(192) 00:13:06.070 fused_ordering(193) 00:13:06.070 fused_ordering(194) 00:13:06.070 fused_ordering(195) 00:13:06.070 fused_ordering(196) 00:13:06.070 fused_ordering(197) 00:13:06.070 fused_ordering(198) 00:13:06.070 fused_ordering(199) 00:13:06.070 fused_ordering(200) 00:13:06.070 fused_ordering(201) 00:13:06.070 fused_ordering(202) 00:13:06.070 fused_ordering(203) 00:13:06.070 fused_ordering(204) 00:13:06.070 fused_ordering(205) 00:13:06.328 fused_ordering(206) 00:13:06.328 fused_ordering(207) 00:13:06.328 fused_ordering(208) 00:13:06.328 fused_ordering(209) 00:13:06.328 fused_ordering(210) 00:13:06.328 fused_ordering(211) 00:13:06.328 fused_ordering(212) 00:13:06.328 fused_ordering(213) 00:13:06.328 fused_ordering(214) 00:13:06.328 fused_ordering(215) 00:13:06.328 fused_ordering(216) 00:13:06.328 fused_ordering(217) 00:13:06.328 fused_ordering(218) 00:13:06.328 fused_ordering(219) 00:13:06.328 fused_ordering(220) 00:13:06.328 fused_ordering(221) 00:13:06.328 fused_ordering(222) 00:13:06.328 fused_ordering(223) 00:13:06.328 fused_ordering(224) 00:13:06.328 fused_ordering(225) 00:13:06.328 fused_ordering(226) 00:13:06.328 fused_ordering(227) 00:13:06.328 fused_ordering(228) 00:13:06.328 fused_ordering(229) 00:13:06.328 fused_ordering(230) 00:13:06.328 fused_ordering(231) 00:13:06.328 fused_ordering(232) 00:13:06.328 fused_ordering(233) 00:13:06.328 fused_ordering(234) 00:13:06.328 fused_ordering(235) 00:13:06.328 fused_ordering(236) 00:13:06.328 fused_ordering(237) 00:13:06.328 fused_ordering(238) 00:13:06.328 fused_ordering(239) 00:13:06.328 fused_ordering(240) 00:13:06.328 fused_ordering(241) 00:13:06.328 fused_ordering(242) 00:13:06.328 fused_ordering(243) 00:13:06.328 fused_ordering(244) 00:13:06.328 fused_ordering(245) 00:13:06.328 fused_ordering(246) 00:13:06.328 fused_ordering(247) 00:13:06.328 fused_ordering(248) 00:13:06.328 fused_ordering(249) 00:13:06.328 fused_ordering(250) 00:13:06.328 fused_ordering(251) 00:13:06.328 fused_ordering(252) 00:13:06.328 fused_ordering(253) 00:13:06.328 fused_ordering(254) 00:13:06.328 fused_ordering(255) 00:13:06.328 fused_ordering(256) 00:13:06.328 fused_ordering(257) 00:13:06.328 fused_ordering(258) 00:13:06.328 fused_ordering(259) 00:13:06.328 fused_ordering(260) 00:13:06.328 fused_ordering(261) 00:13:06.328 fused_ordering(262) 00:13:06.328 fused_ordering(263) 00:13:06.328 fused_ordering(264) 00:13:06.328 fused_ordering(265) 00:13:06.328 fused_ordering(266) 00:13:06.328 fused_ordering(267) 00:13:06.328 fused_ordering(268) 00:13:06.328 fused_ordering(269) 00:13:06.328 fused_ordering(270) 00:13:06.328 fused_ordering(271) 00:13:06.328 fused_ordering(272) 00:13:06.328 fused_ordering(273) 00:13:06.328 fused_ordering(274) 00:13:06.328 fused_ordering(275) 00:13:06.328 fused_ordering(276) 00:13:06.328 fused_ordering(277) 00:13:06.328 fused_ordering(278) 00:13:06.328 fused_ordering(279) 00:13:06.328 fused_ordering(280) 00:13:06.328 fused_ordering(281) 00:13:06.328 fused_ordering(282) 00:13:06.328 fused_ordering(283) 00:13:06.328 fused_ordering(284) 00:13:06.328 fused_ordering(285) 00:13:06.328 fused_ordering(286) 00:13:06.328 fused_ordering(287) 00:13:06.328 fused_ordering(288) 00:13:06.328 fused_ordering(289) 00:13:06.328 fused_ordering(290) 00:13:06.328 fused_ordering(291) 00:13:06.328 fused_ordering(292) 00:13:06.328 fused_ordering(293) 00:13:06.328 fused_ordering(294) 00:13:06.328 fused_ordering(295) 00:13:06.328 fused_ordering(296) 
00:13:06.328 fused_ordering(297) 00:13:06.328 fused_ordering(298) 00:13:06.328 fused_ordering(299) 00:13:06.328 fused_ordering(300) 00:13:06.328 fused_ordering(301) 00:13:06.328 fused_ordering(302) 00:13:06.328 fused_ordering(303) 00:13:06.328 fused_ordering(304) 00:13:06.328 fused_ordering(305) 00:13:06.328 fused_ordering(306) 00:13:06.328 fused_ordering(307) 00:13:06.328 fused_ordering(308) 00:13:06.328 fused_ordering(309) 00:13:06.328 fused_ordering(310) 00:13:06.328 fused_ordering(311) 00:13:06.328 fused_ordering(312) 00:13:06.328 fused_ordering(313) 00:13:06.328 fused_ordering(314) 00:13:06.328 fused_ordering(315) 00:13:06.328 fused_ordering(316) 00:13:06.328 fused_ordering(317) 00:13:06.328 fused_ordering(318) 00:13:06.328 fused_ordering(319) 00:13:06.328 fused_ordering(320) 00:13:06.328 fused_ordering(321) 00:13:06.328 fused_ordering(322) 00:13:06.328 fused_ordering(323) 00:13:06.328 fused_ordering(324) 00:13:06.328 fused_ordering(325) 00:13:06.328 fused_ordering(326) 00:13:06.328 fused_ordering(327) 00:13:06.328 fused_ordering(328) 00:13:06.328 fused_ordering(329) 00:13:06.328 fused_ordering(330) 00:13:06.328 fused_ordering(331) 00:13:06.328 fused_ordering(332) 00:13:06.328 fused_ordering(333) 00:13:06.328 fused_ordering(334) 00:13:06.328 fused_ordering(335) 00:13:06.328 fused_ordering(336) 00:13:06.328 fused_ordering(337) 00:13:06.328 fused_ordering(338) 00:13:06.328 fused_ordering(339) 00:13:06.328 fused_ordering(340) 00:13:06.328 fused_ordering(341) 00:13:06.328 fused_ordering(342) 00:13:06.328 fused_ordering(343) 00:13:06.328 fused_ordering(344) 00:13:06.328 fused_ordering(345) 00:13:06.328 fused_ordering(346) 00:13:06.328 fused_ordering(347) 00:13:06.328 fused_ordering(348) 00:13:06.328 fused_ordering(349) 00:13:06.328 fused_ordering(350) 00:13:06.328 fused_ordering(351) 00:13:06.328 fused_ordering(352) 00:13:06.328 fused_ordering(353) 00:13:06.328 fused_ordering(354) 00:13:06.328 fused_ordering(355) 00:13:06.328 fused_ordering(356) 00:13:06.328 fused_ordering(357) 00:13:06.328 fused_ordering(358) 00:13:06.328 fused_ordering(359) 00:13:06.328 fused_ordering(360) 00:13:06.328 fused_ordering(361) 00:13:06.328 fused_ordering(362) 00:13:06.328 fused_ordering(363) 00:13:06.328 fused_ordering(364) 00:13:06.328 fused_ordering(365) 00:13:06.328 fused_ordering(366) 00:13:06.328 fused_ordering(367) 00:13:06.328 fused_ordering(368) 00:13:06.328 fused_ordering(369) 00:13:06.328 fused_ordering(370) 00:13:06.328 fused_ordering(371) 00:13:06.328 fused_ordering(372) 00:13:06.328 fused_ordering(373) 00:13:06.328 fused_ordering(374) 00:13:06.328 fused_ordering(375) 00:13:06.328 fused_ordering(376) 00:13:06.328 fused_ordering(377) 00:13:06.328 fused_ordering(378) 00:13:06.328 fused_ordering(379) 00:13:06.328 fused_ordering(380) 00:13:06.328 fused_ordering(381) 00:13:06.328 fused_ordering(382) 00:13:06.328 fused_ordering(383) 00:13:06.328 fused_ordering(384) 00:13:06.328 fused_ordering(385) 00:13:06.328 fused_ordering(386) 00:13:06.328 fused_ordering(387) 00:13:06.328 fused_ordering(388) 00:13:06.328 fused_ordering(389) 00:13:06.328 fused_ordering(390) 00:13:06.328 fused_ordering(391) 00:13:06.328 fused_ordering(392) 00:13:06.328 fused_ordering(393) 00:13:06.328 fused_ordering(394) 00:13:06.328 fused_ordering(395) 00:13:06.328 fused_ordering(396) 00:13:06.328 fused_ordering(397) 00:13:06.328 fused_ordering(398) 00:13:06.328 fused_ordering(399) 00:13:06.328 fused_ordering(400) 00:13:06.328 fused_ordering(401) 00:13:06.328 fused_ordering(402) 00:13:06.328 fused_ordering(403) 00:13:06.328 
fused_ordering(404) 00:13:06.328 fused_ordering(405) 00:13:06.328 fused_ordering(406) 00:13:06.328 fused_ordering(407) 00:13:06.328 fused_ordering(408) 00:13:06.328 fused_ordering(409) 00:13:06.328 fused_ordering(410) 00:13:06.586 fused_ordering(411) 00:13:06.586 fused_ordering(412) 00:13:06.586 fused_ordering(413) 00:13:06.586 fused_ordering(414) 00:13:06.586 fused_ordering(415) 00:13:06.586 fused_ordering(416) 00:13:06.586 fused_ordering(417) 00:13:06.586 fused_ordering(418) 00:13:06.586 fused_ordering(419) 00:13:06.586 fused_ordering(420) 00:13:06.586 fused_ordering(421) 00:13:06.586 fused_ordering(422) 00:13:06.586 fused_ordering(423) 00:13:06.586 fused_ordering(424) 00:13:06.586 fused_ordering(425) 00:13:06.586 fused_ordering(426) 00:13:06.586 fused_ordering(427) 00:13:06.586 fused_ordering(428) 00:13:06.586 fused_ordering(429) 00:13:06.586 fused_ordering(430) 00:13:06.586 fused_ordering(431) 00:13:06.586 fused_ordering(432) 00:13:06.586 fused_ordering(433) 00:13:06.586 fused_ordering(434) 00:13:06.586 fused_ordering(435) 00:13:06.586 fused_ordering(436) 00:13:06.586 fused_ordering(437) 00:13:06.586 fused_ordering(438) 00:13:06.586 fused_ordering(439) 00:13:06.586 fused_ordering(440) 00:13:06.586 fused_ordering(441) 00:13:06.586 fused_ordering(442) 00:13:06.586 fused_ordering(443) 00:13:06.586 fused_ordering(444) 00:13:06.586 fused_ordering(445) 00:13:06.586 fused_ordering(446) 00:13:06.586 fused_ordering(447) 00:13:06.586 fused_ordering(448) 00:13:06.586 fused_ordering(449) 00:13:06.586 fused_ordering(450) 00:13:06.586 fused_ordering(451) 00:13:06.586 fused_ordering(452) 00:13:06.586 fused_ordering(453) 00:13:06.586 fused_ordering(454) 00:13:06.586 fused_ordering(455) 00:13:06.586 fused_ordering(456) 00:13:06.586 fused_ordering(457) 00:13:06.586 fused_ordering(458) 00:13:06.586 fused_ordering(459) 00:13:06.586 fused_ordering(460) 00:13:06.586 fused_ordering(461) 00:13:06.586 fused_ordering(462) 00:13:06.586 fused_ordering(463) 00:13:06.586 fused_ordering(464) 00:13:06.586 fused_ordering(465) 00:13:06.586 fused_ordering(466) 00:13:06.586 fused_ordering(467) 00:13:06.586 fused_ordering(468) 00:13:06.586 fused_ordering(469) 00:13:06.586 fused_ordering(470) 00:13:06.586 fused_ordering(471) 00:13:06.586 fused_ordering(472) 00:13:06.586 fused_ordering(473) 00:13:06.586 fused_ordering(474) 00:13:06.586 fused_ordering(475) 00:13:06.586 fused_ordering(476) 00:13:06.586 fused_ordering(477) 00:13:06.586 fused_ordering(478) 00:13:06.586 fused_ordering(479) 00:13:06.586 fused_ordering(480) 00:13:06.586 fused_ordering(481) 00:13:06.586 fused_ordering(482) 00:13:06.586 fused_ordering(483) 00:13:06.586 fused_ordering(484) 00:13:06.586 fused_ordering(485) 00:13:06.586 fused_ordering(486) 00:13:06.586 fused_ordering(487) 00:13:06.586 fused_ordering(488) 00:13:06.586 fused_ordering(489) 00:13:06.586 fused_ordering(490) 00:13:06.586 fused_ordering(491) 00:13:06.586 fused_ordering(492) 00:13:06.586 fused_ordering(493) 00:13:06.586 fused_ordering(494) 00:13:06.586 fused_ordering(495) 00:13:06.586 fused_ordering(496) 00:13:06.586 fused_ordering(497) 00:13:06.586 fused_ordering(498) 00:13:06.586 fused_ordering(499) 00:13:06.586 fused_ordering(500) 00:13:06.586 fused_ordering(501) 00:13:06.586 fused_ordering(502) 00:13:06.586 fused_ordering(503) 00:13:06.586 fused_ordering(504) 00:13:06.586 fused_ordering(505) 00:13:06.586 fused_ordering(506) 00:13:06.586 fused_ordering(507) 00:13:06.586 fused_ordering(508) 00:13:06.586 fused_ordering(509) 00:13:06.586 fused_ordering(510) 00:13:06.586 fused_ordering(511) 
00:13:06.586 fused_ordering(512) 00:13:06.586 fused_ordering(513) 00:13:06.586 fused_ordering(514) 00:13:06.586 fused_ordering(515) 00:13:06.586 fused_ordering(516) 00:13:06.586 fused_ordering(517) 00:13:06.586 fused_ordering(518) 00:13:06.586 fused_ordering(519) 00:13:06.586 fused_ordering(520) 00:13:06.586 fused_ordering(521) 00:13:06.586 fused_ordering(522) 00:13:06.586 fused_ordering(523) 00:13:06.586 fused_ordering(524) 00:13:06.586 fused_ordering(525) 00:13:06.586 fused_ordering(526) 00:13:06.586 fused_ordering(527) 00:13:06.586 fused_ordering(528) 00:13:06.586 fused_ordering(529) 00:13:06.586 fused_ordering(530) 00:13:06.586 fused_ordering(531) 00:13:06.586 fused_ordering(532) 00:13:06.586 fused_ordering(533) 00:13:06.586 fused_ordering(534) 00:13:06.586 fused_ordering(535) 00:13:06.586 fused_ordering(536) 00:13:06.586 fused_ordering(537) 00:13:06.586 fused_ordering(538) 00:13:06.586 fused_ordering(539) 00:13:06.586 fused_ordering(540) 00:13:06.586 fused_ordering(541) 00:13:06.586 fused_ordering(542) 00:13:06.586 fused_ordering(543) 00:13:06.586 fused_ordering(544) 00:13:06.586 fused_ordering(545) 00:13:06.586 fused_ordering(546) 00:13:06.586 fused_ordering(547) 00:13:06.586 fused_ordering(548) 00:13:06.586 fused_ordering(549) 00:13:06.586 fused_ordering(550) 00:13:06.586 fused_ordering(551) 00:13:06.586 fused_ordering(552) 00:13:06.586 fused_ordering(553) 00:13:06.586 fused_ordering(554) 00:13:06.586 fused_ordering(555) 00:13:06.586 fused_ordering(556) 00:13:06.586 fused_ordering(557) 00:13:06.586 fused_ordering(558) 00:13:06.586 fused_ordering(559) 00:13:06.586 fused_ordering(560) 00:13:06.586 fused_ordering(561) 00:13:06.586 fused_ordering(562) 00:13:06.586 fused_ordering(563) 00:13:06.586 fused_ordering(564) 00:13:06.586 fused_ordering(565) 00:13:06.586 fused_ordering(566) 00:13:06.586 fused_ordering(567) 00:13:06.586 fused_ordering(568) 00:13:06.586 fused_ordering(569) 00:13:06.586 fused_ordering(570) 00:13:06.586 fused_ordering(571) 00:13:06.586 fused_ordering(572) 00:13:06.586 fused_ordering(573) 00:13:06.586 fused_ordering(574) 00:13:06.586 fused_ordering(575) 00:13:06.586 fused_ordering(576) 00:13:06.586 fused_ordering(577) 00:13:06.586 fused_ordering(578) 00:13:06.586 fused_ordering(579) 00:13:06.586 fused_ordering(580) 00:13:06.586 fused_ordering(581) 00:13:06.586 fused_ordering(582) 00:13:06.586 fused_ordering(583) 00:13:06.586 fused_ordering(584) 00:13:06.586 fused_ordering(585) 00:13:06.586 fused_ordering(586) 00:13:06.586 fused_ordering(587) 00:13:06.586 fused_ordering(588) 00:13:06.586 fused_ordering(589) 00:13:06.586 fused_ordering(590) 00:13:06.586 fused_ordering(591) 00:13:06.586 fused_ordering(592) 00:13:06.586 fused_ordering(593) 00:13:06.586 fused_ordering(594) 00:13:06.586 fused_ordering(595) 00:13:06.586 fused_ordering(596) 00:13:06.586 fused_ordering(597) 00:13:06.586 fused_ordering(598) 00:13:06.586 fused_ordering(599) 00:13:06.586 fused_ordering(600) 00:13:06.586 fused_ordering(601) 00:13:06.586 fused_ordering(602) 00:13:06.586 fused_ordering(603) 00:13:06.586 fused_ordering(604) 00:13:06.586 fused_ordering(605) 00:13:06.586 fused_ordering(606) 00:13:06.586 fused_ordering(607) 00:13:06.586 fused_ordering(608) 00:13:06.586 fused_ordering(609) 00:13:06.586 fused_ordering(610) 00:13:06.586 fused_ordering(611) 00:13:06.586 fused_ordering(612) 00:13:06.586 fused_ordering(613) 00:13:06.586 fused_ordering(614) 00:13:06.586 fused_ordering(615) 00:13:06.845 fused_ordering(616) 00:13:06.845 fused_ordering(617) 00:13:06.845 fused_ordering(618) 00:13:06.845 
fused_ordering(619) 00:13:06.845 fused_ordering(620) 00:13:06.845 fused_ordering(621) 00:13:06.845 fused_ordering(622) 00:13:06.845 fused_ordering(623) 00:13:06.845 fused_ordering(624) 00:13:06.845 fused_ordering(625) 00:13:06.845 fused_ordering(626) 00:13:06.845 fused_ordering(627) 00:13:06.845 fused_ordering(628) 00:13:06.845 fused_ordering(629) 00:13:06.845 fused_ordering(630) 00:13:06.845 fused_ordering(631) 00:13:06.845 fused_ordering(632) 00:13:06.845 fused_ordering(633) 00:13:06.845 fused_ordering(634) 00:13:06.845 fused_ordering(635) 00:13:06.845 fused_ordering(636) 00:13:06.845 fused_ordering(637) 00:13:06.845 fused_ordering(638) 00:13:06.845 fused_ordering(639) 00:13:06.845 fused_ordering(640) 00:13:06.845 fused_ordering(641) 00:13:06.845 fused_ordering(642) 00:13:06.845 fused_ordering(643) 00:13:06.845 fused_ordering(644) 00:13:06.845 fused_ordering(645) 00:13:06.845 fused_ordering(646) 00:13:06.845 fused_ordering(647) 00:13:06.845 fused_ordering(648) 00:13:06.845 fused_ordering(649) 00:13:06.845 fused_ordering(650) 00:13:06.845 fused_ordering(651) 00:13:06.845 fused_ordering(652) 00:13:06.845 fused_ordering(653) 00:13:06.845 fused_ordering(654) 00:13:06.845 fused_ordering(655) 00:13:06.845 fused_ordering(656) 00:13:06.845 fused_ordering(657) 00:13:06.845 fused_ordering(658) 00:13:06.845 fused_ordering(659) 00:13:06.845 fused_ordering(660) 00:13:06.845 fused_ordering(661) 00:13:06.845 fused_ordering(662) 00:13:06.845 fused_ordering(663) 00:13:06.845 fused_ordering(664) 00:13:06.845 fused_ordering(665) 00:13:06.845 fused_ordering(666) 00:13:06.845 fused_ordering(667) 00:13:06.845 fused_ordering(668) 00:13:06.845 fused_ordering(669) 00:13:06.845 fused_ordering(670) 00:13:06.845 fused_ordering(671) 00:13:06.845 fused_ordering(672) 00:13:06.845 fused_ordering(673) 00:13:06.845 fused_ordering(674) 00:13:06.845 fused_ordering(675) 00:13:06.845 fused_ordering(676) 00:13:06.845 fused_ordering(677) 00:13:06.845 fused_ordering(678) 00:13:06.845 fused_ordering(679) 00:13:06.845 fused_ordering(680) 00:13:06.845 fused_ordering(681) 00:13:06.845 fused_ordering(682) 00:13:06.845 fused_ordering(683) 00:13:06.845 fused_ordering(684) 00:13:06.845 fused_ordering(685) 00:13:06.845 fused_ordering(686) 00:13:06.845 fused_ordering(687) 00:13:06.845 fused_ordering(688) 00:13:06.845 fused_ordering(689) 00:13:06.845 fused_ordering(690) 00:13:06.845 fused_ordering(691) 00:13:06.845 fused_ordering(692) 00:13:06.845 fused_ordering(693) 00:13:06.845 fused_ordering(694) 00:13:06.845 fused_ordering(695) 00:13:06.845 fused_ordering(696) 00:13:06.845 fused_ordering(697) 00:13:06.845 fused_ordering(698) 00:13:06.845 fused_ordering(699) 00:13:06.845 fused_ordering(700) 00:13:06.845 fused_ordering(701) 00:13:06.845 fused_ordering(702) 00:13:06.845 fused_ordering(703) 00:13:06.845 fused_ordering(704) 00:13:06.845 fused_ordering(705) 00:13:06.845 fused_ordering(706) 00:13:06.845 fused_ordering(707) 00:13:06.845 fused_ordering(708) 00:13:06.845 fused_ordering(709) 00:13:06.845 fused_ordering(710) 00:13:06.845 fused_ordering(711) 00:13:06.845 fused_ordering(712) 00:13:06.845 fused_ordering(713) 00:13:06.845 fused_ordering(714) 00:13:06.845 fused_ordering(715) 00:13:06.845 fused_ordering(716) 00:13:06.845 fused_ordering(717) 00:13:06.845 fused_ordering(718) 00:13:06.845 fused_ordering(719) 00:13:06.845 fused_ordering(720) 00:13:06.845 fused_ordering(721) 00:13:06.845 fused_ordering(722) 00:13:06.845 fused_ordering(723) 00:13:06.845 fused_ordering(724) 00:13:06.845 fused_ordering(725) 00:13:06.845 fused_ordering(726) 
00:13:06.845 fused_ordering(727) 00:13:06.845 fused_ordering(728) 00:13:06.845 fused_ordering(729) 00:13:06.845 fused_ordering(730) 00:13:06.845 fused_ordering(731) 00:13:06.845 fused_ordering(732) 00:13:06.845 fused_ordering(733) 00:13:06.845 fused_ordering(734) 00:13:06.845 fused_ordering(735) 00:13:06.845 fused_ordering(736) 00:13:06.845 fused_ordering(737) 00:13:06.845 fused_ordering(738) 00:13:06.845 fused_ordering(739) 00:13:06.845 fused_ordering(740) 00:13:06.845 fused_ordering(741) 00:13:06.845 fused_ordering(742) 00:13:06.845 fused_ordering(743) 00:13:06.845 fused_ordering(744) 00:13:06.845 fused_ordering(745) 00:13:06.845 fused_ordering(746) 00:13:06.845 fused_ordering(747) 00:13:06.845 fused_ordering(748) 00:13:06.845 fused_ordering(749) 00:13:06.845 fused_ordering(750) 00:13:06.845 fused_ordering(751) 00:13:06.845 fused_ordering(752) 00:13:06.845 fused_ordering(753) 00:13:06.845 fused_ordering(754) 00:13:06.845 fused_ordering(755) 00:13:06.845 fused_ordering(756) 00:13:06.845 fused_ordering(757) 00:13:06.845 fused_ordering(758) 00:13:06.845 fused_ordering(759) 00:13:06.845 fused_ordering(760) 00:13:06.845 fused_ordering(761) 00:13:06.845 fused_ordering(762) 00:13:06.845 fused_ordering(763) 00:13:06.845 fused_ordering(764) 00:13:06.845 fused_ordering(765) 00:13:06.845 fused_ordering(766) 00:13:06.845 fused_ordering(767) 00:13:06.845 fused_ordering(768) 00:13:06.845 fused_ordering(769) 00:13:06.845 fused_ordering(770) 00:13:06.845 fused_ordering(771) 00:13:06.845 fused_ordering(772) 00:13:06.845 fused_ordering(773) 00:13:06.845 fused_ordering(774) 00:13:06.845 fused_ordering(775) 00:13:06.845 fused_ordering(776) 00:13:06.845 fused_ordering(777) 00:13:06.845 fused_ordering(778) 00:13:06.845 fused_ordering(779) 00:13:06.845 fused_ordering(780) 00:13:06.845 fused_ordering(781) 00:13:06.845 fused_ordering(782) 00:13:06.845 fused_ordering(783) 00:13:06.845 fused_ordering(784) 00:13:06.845 fused_ordering(785) 00:13:06.845 fused_ordering(786) 00:13:06.845 fused_ordering(787) 00:13:06.845 fused_ordering(788) 00:13:06.845 fused_ordering(789) 00:13:06.845 fused_ordering(790) 00:13:06.845 fused_ordering(791) 00:13:06.845 fused_ordering(792) 00:13:06.845 fused_ordering(793) 00:13:06.845 fused_ordering(794) 00:13:06.845 fused_ordering(795) 00:13:06.845 fused_ordering(796) 00:13:06.845 fused_ordering(797) 00:13:06.845 fused_ordering(798) 00:13:06.845 fused_ordering(799) 00:13:06.845 fused_ordering(800) 00:13:06.845 fused_ordering(801) 00:13:06.845 fused_ordering(802) 00:13:06.845 fused_ordering(803) 00:13:06.845 fused_ordering(804) 00:13:06.845 fused_ordering(805) 00:13:06.845 fused_ordering(806) 00:13:06.845 fused_ordering(807) 00:13:06.845 fused_ordering(808) 00:13:06.845 fused_ordering(809) 00:13:06.845 fused_ordering(810) 00:13:06.845 fused_ordering(811) 00:13:06.845 fused_ordering(812) 00:13:06.845 fused_ordering(813) 00:13:06.845 fused_ordering(814) 00:13:06.845 fused_ordering(815) 00:13:06.845 fused_ordering(816) 00:13:06.845 fused_ordering(817) 00:13:06.845 fused_ordering(818) 00:13:06.845 fused_ordering(819) 00:13:06.845 fused_ordering(820) 00:13:07.411 fused_ordering(821) 00:13:07.411 fused_ordering(822) 00:13:07.411 fused_ordering(823) 00:13:07.411 fused_ordering(824) 00:13:07.411 fused_ordering(825) 00:13:07.411 fused_ordering(826) 00:13:07.411 fused_ordering(827) 00:13:07.411 fused_ordering(828) 00:13:07.411 fused_ordering(829) 00:13:07.411 fused_ordering(830) 00:13:07.411 fused_ordering(831) 00:13:07.411 fused_ordering(832) 00:13:07.411 fused_ordering(833) 00:13:07.411 
fused_ordering(834) 00:13:07.411 fused_ordering(835) 00:13:07.411 fused_ordering(836) 00:13:07.411 fused_ordering(837) 00:13:07.411 fused_ordering(838) 00:13:07.411 fused_ordering(839) 00:13:07.411 fused_ordering(840) 00:13:07.411 fused_ordering(841) 00:13:07.411 fused_ordering(842) 00:13:07.411 fused_ordering(843) 00:13:07.411 fused_ordering(844) 00:13:07.411 fused_ordering(845) 00:13:07.411 fused_ordering(846) 00:13:07.411 fused_ordering(847) 00:13:07.411 fused_ordering(848) 00:13:07.411 fused_ordering(849) 00:13:07.411 fused_ordering(850) 00:13:07.411 fused_ordering(851) 00:13:07.411 fused_ordering(852) 00:13:07.411 fused_ordering(853) 00:13:07.411 fused_ordering(854) 00:13:07.411 fused_ordering(855) 00:13:07.411 fused_ordering(856) 00:13:07.411 fused_ordering(857) 00:13:07.411 fused_ordering(858) 00:13:07.411 fused_ordering(859) 00:13:07.411 fused_ordering(860) 00:13:07.411 fused_ordering(861) 00:13:07.411 fused_ordering(862) 00:13:07.411 fused_ordering(863) 00:13:07.411 fused_ordering(864) 00:13:07.411 fused_ordering(865) 00:13:07.411 fused_ordering(866) 00:13:07.411 fused_ordering(867) 00:13:07.411 fused_ordering(868) 00:13:07.411 fused_ordering(869) 00:13:07.411 fused_ordering(870) 00:13:07.411 fused_ordering(871) 00:13:07.411 fused_ordering(872) 00:13:07.411 fused_ordering(873) 00:13:07.411 fused_ordering(874) 00:13:07.411 fused_ordering(875) 00:13:07.411 fused_ordering(876) 00:13:07.411 fused_ordering(877) 00:13:07.411 fused_ordering(878) 00:13:07.411 fused_ordering(879) 00:13:07.411 fused_ordering(880) 00:13:07.411 fused_ordering(881) 00:13:07.411 fused_ordering(882) 00:13:07.411 fused_ordering(883) 00:13:07.411 fused_ordering(884) 00:13:07.411 fused_ordering(885) 00:13:07.411 fused_ordering(886) 00:13:07.411 fused_ordering(887) 00:13:07.411 fused_ordering(888) 00:13:07.411 fused_ordering(889) 00:13:07.411 fused_ordering(890) 00:13:07.411 fused_ordering(891) 00:13:07.411 fused_ordering(892) 00:13:07.411 fused_ordering(893) 00:13:07.411 fused_ordering(894) 00:13:07.411 fused_ordering(895) 00:13:07.411 fused_ordering(896) 00:13:07.411 fused_ordering(897) 00:13:07.411 fused_ordering(898) 00:13:07.411 fused_ordering(899) 00:13:07.411 fused_ordering(900) 00:13:07.411 fused_ordering(901) 00:13:07.411 fused_ordering(902) 00:13:07.411 fused_ordering(903) 00:13:07.411 fused_ordering(904) 00:13:07.411 fused_ordering(905) 00:13:07.411 fused_ordering(906) 00:13:07.411 fused_ordering(907) 00:13:07.411 fused_ordering(908) 00:13:07.411 fused_ordering(909) 00:13:07.411 fused_ordering(910) 00:13:07.411 fused_ordering(911) 00:13:07.411 fused_ordering(912) 00:13:07.411 fused_ordering(913) 00:13:07.411 fused_ordering(914) 00:13:07.411 fused_ordering(915) 00:13:07.411 fused_ordering(916) 00:13:07.411 fused_ordering(917) 00:13:07.411 fused_ordering(918) 00:13:07.411 fused_ordering(919) 00:13:07.411 fused_ordering(920) 00:13:07.411 fused_ordering(921) 00:13:07.411 fused_ordering(922) 00:13:07.411 fused_ordering(923) 00:13:07.411 fused_ordering(924) 00:13:07.411 fused_ordering(925) 00:13:07.411 fused_ordering(926) 00:13:07.411 fused_ordering(927) 00:13:07.411 fused_ordering(928) 00:13:07.411 fused_ordering(929) 00:13:07.411 fused_ordering(930) 00:13:07.411 fused_ordering(931) 00:13:07.411 fused_ordering(932) 00:13:07.411 fused_ordering(933) 00:13:07.411 fused_ordering(934) 00:13:07.411 fused_ordering(935) 00:13:07.411 fused_ordering(936) 00:13:07.411 fused_ordering(937) 00:13:07.411 fused_ordering(938) 00:13:07.411 fused_ordering(939) 00:13:07.411 fused_ordering(940) 00:13:07.411 fused_ordering(941) 
00:13:07.411 fused_ordering(942) 00:13:07.411 fused_ordering(943) 00:13:07.411 fused_ordering(944) 00:13:07.411 fused_ordering(945) 00:13:07.411 fused_ordering(946) 00:13:07.411 fused_ordering(947) 00:13:07.411 fused_ordering(948) 00:13:07.411 fused_ordering(949) 00:13:07.411 fused_ordering(950) 00:13:07.411 fused_ordering(951) 00:13:07.411 fused_ordering(952) 00:13:07.411 fused_ordering(953) 00:13:07.411 fused_ordering(954) 00:13:07.411 fused_ordering(955) 00:13:07.411 fused_ordering(956) 00:13:07.411 fused_ordering(957) 00:13:07.411 fused_ordering(958) 00:13:07.411 fused_ordering(959) 00:13:07.411 fused_ordering(960) 00:13:07.411 fused_ordering(961) 00:13:07.411 fused_ordering(962) 00:13:07.411 fused_ordering(963) 00:13:07.411 fused_ordering(964) 00:13:07.411 fused_ordering(965) 00:13:07.411 fused_ordering(966) 00:13:07.411 fused_ordering(967) 00:13:07.411 fused_ordering(968) 00:13:07.411 fused_ordering(969) 00:13:07.411 fused_ordering(970) 00:13:07.411 fused_ordering(971) 00:13:07.411 fused_ordering(972) 00:13:07.411 fused_ordering(973) 00:13:07.411 fused_ordering(974) 00:13:07.411 fused_ordering(975) 00:13:07.411 fused_ordering(976) 00:13:07.411 fused_ordering(977) 00:13:07.411 fused_ordering(978) 00:13:07.411 fused_ordering(979) 00:13:07.411 fused_ordering(980) 00:13:07.411 fused_ordering(981) 00:13:07.411 fused_ordering(982) 00:13:07.411 fused_ordering(983) 00:13:07.411 fused_ordering(984) 00:13:07.411 fused_ordering(985) 00:13:07.411 fused_ordering(986) 00:13:07.411 fused_ordering(987) 00:13:07.411 fused_ordering(988) 00:13:07.411 fused_ordering(989) 00:13:07.411 fused_ordering(990) 00:13:07.411 fused_ordering(991) 00:13:07.411 fused_ordering(992) 00:13:07.411 fused_ordering(993) 00:13:07.411 fused_ordering(994) 00:13:07.411 fused_ordering(995) 00:13:07.411 fused_ordering(996) 00:13:07.411 fused_ordering(997) 00:13:07.411 fused_ordering(998) 00:13:07.411 fused_ordering(999) 00:13:07.411 fused_ordering(1000) 00:13:07.411 fused_ordering(1001) 00:13:07.411 fused_ordering(1002) 00:13:07.411 fused_ordering(1003) 00:13:07.411 fused_ordering(1004) 00:13:07.411 fused_ordering(1005) 00:13:07.411 fused_ordering(1006) 00:13:07.411 fused_ordering(1007) 00:13:07.411 fused_ordering(1008) 00:13:07.411 fused_ordering(1009) 00:13:07.411 fused_ordering(1010) 00:13:07.411 fused_ordering(1011) 00:13:07.411 fused_ordering(1012) 00:13:07.411 fused_ordering(1013) 00:13:07.411 fused_ordering(1014) 00:13:07.411 fused_ordering(1015) 00:13:07.411 fused_ordering(1016) 00:13:07.411 fused_ordering(1017) 00:13:07.411 fused_ordering(1018) 00:13:07.411 fused_ordering(1019) 00:13:07.411 fused_ordering(1020) 00:13:07.411 fused_ordering(1021) 00:13:07.411 fused_ordering(1022) 00:13:07.411 fused_ordering(1023) 00:13:07.411 06:59:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:07.411 06:59:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:07.411 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.411 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:07.411 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.411 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:07.411 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.411 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.411 rmmod nvme_tcp 00:13:07.669 rmmod 
nvme_fabrics 00:13:07.669 rmmod nvme_keyring 00:13:07.669 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 87346 ']' 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 87346 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 87346 ']' 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 87346 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87346 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:07.670 killing process with pid 87346 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87346' 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 87346 00:13:07.670 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 87346 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:07.929 00:13:07.929 real 0m3.987s 00:13:07.929 user 0m4.550s 00:13:07.929 sys 0m1.502s 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:07.929 06:59:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:07.929 ************************************ 00:13:07.929 END TEST nvmf_fused_ordering 00:13:07.929 ************************************ 00:13:07.929 06:59:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:07.929 06:59:15 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:07.929 06:59:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:07.929 06:59:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.929 06:59:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:07.929 
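The fused_ordering client completed all 1024 fused operations against the null namespace, after which the EXIT trap runs nvmftestfini: the kernel NVMe modules are unloaded, the target (pid 87346) is killed, the namespace teardown runs and the initiator address is flushed, with the whole test finishing in roughly four seconds. Against a target that is still up and configured as above, the same check can be repeated standalone, or the subsystem reached through the kernel initiator (the fused_ordering invocation is the one from the trace; the nvme connect line is an illustrative extra, not part of this run):

# re-run the ordering check directly against the listener
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# or attach the namespace through the kernel NVMe/TCP initiator instead
modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1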
************************************ 00:13:07.929 START TEST nvmf_delete_subsystem 00:13:07.929 ************************************ 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:07.929 * Looking for test storage... 00:13:07.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.929 06:59:15 
nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.929 06:59:15 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:07.929 06:59:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:08.187 Cannot find device "nvmf_tgt_br" 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:08.187 Cannot find device "nvmf_tgt_br2" 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:08.187 Cannot find device "nvmf_tgt_br" 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:08.187 Cannot find device "nvmf_tgt_br2" 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:08.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:08.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:08.187 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:08.188 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:08.188 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:08.188 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:08.188 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:08.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:13:08.445 00:13:08.445 --- 10.0.0.2 ping statistics --- 00:13:08.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.445 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:08.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:08.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:13:08.445 00:13:08.445 --- 10.0.0.3 ping statistics --- 00:13:08.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.445 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:08.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:08.445 00:13:08.445 --- 10.0.0.1 ping statistics --- 00:13:08.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.445 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=87607 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 87607 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 87607 ']' 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:13:08.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.445 06:59:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:08.445 [2024-07-13 06:59:16.394196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:08.445 [2024-07-13 06:59:16.394303] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.703 [2024-07-13 06:59:16.530404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:08.703 [2024-07-13 06:59:16.649609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.703 [2024-07-13 06:59:16.649672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.703 [2024-07-13 06:59:16.649683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.703 [2024-07-13 06:59:16.649691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.703 [2024-07-13 06:59:16.649698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.703 [2024-07-13 06:59:16.650405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.703 [2024-07-13 06:59:16.650455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.637 [2024-07-13 06:59:17.400541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.637 [2024-07-13 06:59:17.416814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.637 NULL1 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.637 Delay0 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=87658 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:09.637 06:59:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:09.637 [2024-07-13 06:59:17.611400] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
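Condensed from the xtrace output above, the first half of delete_subsystem.sh amounts to the following RPC sequence. This is a sketch reconstructed from the trace (rpc.py talking to the default /var/tmp/spdk.sock socket is assumed), not the literal test source:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192        # same transport options as in the trace
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                # null backing bdev for the delay bdev
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0            # ~1 s of injected latency keeps I/O queued

    # drive I/O at the subsystem, then delete it while requests are still in flight
    $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $rpc nvmf_delete_subsystem "$nqn"                   # outstanding I/O is expected to fail

The long run of 'Read/Write completed with error (sct=0, sc=8)' lines that follows is that expected failure mode: commands still queued in Delay0 being completed with errors back to spdk_nvme_perf once the subsystem disappears, which is what this test exercises.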
00:13:11.537 06:59:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.537 06:59:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.537 06:59:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 [2024-07-13 06:59:19.654804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7535c0 is same with the state(5) to be set 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error 
(sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 starting I/O failed: -6 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 [2024-07-13 06:59:19.656161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa17800d430 is same with the state(5) to be set 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Read completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.796 Write completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, 
sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 [2024-07-13 06:59:19.656708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x752fa0 is same with the state(5) to be set 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read 
completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:11.797 Write completed with error (sct=0, sc=8) 00:13:11.797 Read completed with error (sct=0, sc=8) 00:13:12.732 [2024-07-13 06:59:20.626162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x751ae0 is same with the state(5) to be set 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 [2024-07-13 06:59:20.652148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa17800cfe0 is same with the state(5) to be set 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with 
error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 [2024-07-13 06:59:20.652721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa17800d740 is same with the state(5) to be set 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 [2024-07-13 06:59:20.653932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7532b0 is same with the state(5) to be set 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error 
(sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Write completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 Read completed with error (sct=0, sc=8) 00:13:12.733 [2024-07-13 06:59:20.656510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x752dc0 is same with the state(5) to be set 00:13:12.733 Initializing NVMe Controllers 00:13:12.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:12.733 Controller IO queue size 128, less than required. 00:13:12.733 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:12.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:12.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:12.733 Initialization complete. Launching workers. 
00:13:12.733 ======================================================== 00:13:12.733 Latency(us) 00:13:12.733 Device Information : IOPS MiB/s Average min max 00:13:12.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.20 0.08 934868.02 1732.82 2005114.83 00:13:12.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.84 0.08 1036666.28 1065.20 2002414.52 00:13:12.733 ======================================================== 00:13:12.733 Total : 325.03 0.16 983675.41 1065.20 2005114.83 00:13:12.733 00:13:12.733 [2024-07-13 06:59:20.657313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x751ae0 (9): Bad file descriptor 00:13:12.733 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:12.733 06:59:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.733 06:59:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:12.733 06:59:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87658 00:13:12.733 06:59:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:13.299 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:13.299 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87658 00:13:13.299 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (87658) - No such process 00:13:13.299 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 87658 00:13:13.299 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:13.299 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 87658 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 87658 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:13.300 [2024-07-13 06:59:21.181799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=87704 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87704 00:13:13.300 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:13.300 [2024-07-13 06:59:21.359385] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
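The burst of kill -0 / sleep 0.5 lines that follows comes from a small polling loop in delete_subsystem.sh that waits for the second spdk_nvme_perf run (pid 87704, started with -t 3) to exit on its own. Roughly, based on the trace (only the 'delay++ > 20' bound is visible; the bail-out action on timeout is an assumption):

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # perf still running?
        sleep 0.5
        ((delay++ > 20)) && exit 1               # assumed failure path after ~10 s of polling
    done
    wait "$perf_pid"                             # reap perf and pick up its exit status

Once kill -0 reports 'No such process', the loop ends and the script moves on to removing the trap and running the nvmftestfini teardown.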
00:13:13.865 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:13.865 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87704 00:13:13.865 06:59:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:14.432 06:59:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:14.432 06:59:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87704 00:13:14.432 06:59:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:14.690 06:59:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:14.690 06:59:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87704 00:13:14.690 06:59:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:15.253 06:59:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:15.253 06:59:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87704 00:13:15.253 06:59:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:15.819 06:59:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:15.819 06:59:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87704 00:13:15.819 06:59:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:16.385 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:16.385 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87704 00:13:16.385 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:16.385 Initializing NVMe Controllers 00:13:16.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:16.385 Controller IO queue size 128, less than required. 00:13:16.385 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:16.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:16.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:16.385 Initialization complete. Launching workers. 
00:13:16.385 ======================================================== 00:13:16.385 Latency(us) 00:13:16.385 Device Information : IOPS MiB/s Average min max 00:13:16.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004716.41 1000191.05 1015977.86 00:13:16.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1008947.63 1000182.47 1042462.81 00:13:16.385 ======================================================== 00:13:16.385 Total : 256.00 0.12 1006832.02 1000182.47 1042462.81 00:13:16.385 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87704 00:13:16.951 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (87704) - No such process 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 87704 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.951 rmmod nvme_tcp 00:13:16.951 rmmod nvme_fabrics 00:13:16.951 rmmod nvme_keyring 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 87607 ']' 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 87607 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 87607 ']' 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 87607 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87607 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:16.951 killing process with pid 87607 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87607' 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 87607 00:13:16.951 06:59:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 87607 00:13:17.209 06:59:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:17.209 ************************************ 00:13:17.209 00:13:17.209 real 0m9.334s 00:13:17.209 user 0m29.152s 00:13:17.209 sys 0m1.194s 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.209 06:59:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:17.209 END TEST nvmf_delete_subsystem 00:13:17.209 ************************************ 00:13:17.209 06:59:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:17.209 06:59:25 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:17.209 06:59:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.209 06:59:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.209 06:59:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:17.209 ************************************ 00:13:17.209 START TEST nvmf_ns_masking 00:13:17.209 ************************************ 00:13:17.209 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:17.468 * Looking for test storage... 
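The nvmftestfini teardown traced just above is the mirror image of the setup. Reduced to its visible effects (helper internals such as _remove_spdk_ns are not shown in the log; the netns removal below is an assumption about what that helper boils down to):

    modprobe -v -r nvme-tcp                        # also drops nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"             # stop the nvmf_tgt reactor (pid 87607 in this run)
    ip netns delete nvmf_tgt_ns_spdk 2> /dev/null  # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if

With nvmf_delete_subsystem finished, the harness immediately starts the next test, nvmf_ns_masking, which repeats the same environment bring-up below.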
00:13:17.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9eb96d38-1395-43de-824e-b6c99e9994ad 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b171d9d8-594d-445e-bba6-b8ae645ebc00 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:17.468 
06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f35b60c5-3603-40b6-82b5-5403306b7c40 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:17.468 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:17.469 Cannot find device "nvmf_tgt_br" 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:13:17.469 06:59:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:17.469 Cannot find device "nvmf_tgt_br2" 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:17.469 Cannot find device "nvmf_tgt_br" 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:17.469 Cannot find device "nvmf_tgt_br2" 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:17.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:17.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:17.469 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:17.728 06:59:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:17.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:13:17.728 00:13:17.728 --- 10.0.0.2 ping statistics --- 00:13:17.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.728 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:17.728 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:17.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:17.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:17.729 00:13:17.729 --- 10.0.0.3 ping statistics --- 00:13:17.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.729 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:17.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:17.729 00:13:17.729 --- 10.0.0.1 ping statistics --- 00:13:17.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.729 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=87945 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 87945 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 87945 ']' 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.729 06:59:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:17.729 [2024-07-13 06:59:25.800534] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:17.729 [2024-07-13 06:59:25.800671] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.988 [2024-07-13 06:59:25.939144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.988 [2024-07-13 06:59:26.052842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.988 [2024-07-13 06:59:26.052916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:17.988 [2024-07-13 06:59:26.052926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.988 [2024-07-13 06:59:26.052942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.988 [2024-07-13 06:59:26.052950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.988 [2024-07-13 06:59:26.052983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.929 06:59:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.929 06:59:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:18.929 06:59:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.929 06:59:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:18.929 06:59:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:18.929 06:59:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.929 06:59:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:19.187 [2024-07-13 06:59:27.026235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.187 06:59:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:19.187 06:59:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:19.187 06:59:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:19.445 Malloc1 00:13:19.445 06:59:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:19.702 Malloc2 00:13:19.702 06:59:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:19.958 06:59:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:19.958 06:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.215 [2024-07-13 06:59:28.260693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.215 06:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:20.215 06:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f35b60c5-3603-40b6-82b5-5403306b7c40 -a 10.0.0.2 -s 4420 -i 4 00:13:20.473 06:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.473 06:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:20.473 06:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.473 06:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:20.473 06:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
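For readers following the trace, the target provisioning and initiator attach performed above reduce to the RPC and nvme-cli sequence below. Every value in it (NQNs, serial, host UUID, 10.0.0.2:4420, bdev sizes) is taken from this run's trace, so treat it as a condensed restatement of the traced commands rather than an independent recipe.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport with the options used by this test, two 64 MiB
# malloc bdevs with 512-byte blocks, one subsystem (serial
# SPDKISFASTANDAWESOME) carrying Malloc1 as namespace 1, and a listener on
# 10.0.0.2:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC bdev_malloc_create 64 512 -b Malloc2
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect presenting the host NQN the masking RPCs will refer
# to, plus a fixed host identifier, then wait for the serial to show up (the
# waitforserial helper does the same with a bounded retry loop).
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I f35b60c5-3603-40b6-82b5-5403306b7c40 -i 4
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done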
00:13:22.374 06:59:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:22.374 06:59:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:22.374 06:59:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.374 06:59:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:22.374 06:59:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.374 06:59:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:22.374 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:22.374 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:22.632 [ 0]:0x1 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32cce96bced945cb9ae41d9438950547 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32cce96bced945cb9ae41d9438950547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:22.632 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:22.891 [ 0]:0x1 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32cce96bced945cb9ae41d9438950547 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32cce96bced945cb9ae41d9438950547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:22.891 [ 1]:0x2 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=cd91479696e54ac9b8892106051392ad 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd91479696e54ac9b8892106051392ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:22.891 06:59:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.149 06:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.407 06:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:23.665 06:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:23.665 06:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f35b60c5-3603-40b6-82b5-5403306b7c40 -a 10.0.0.2 -s 4420 -i 4 00:13:23.665 06:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:23.665 06:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:23.665 06:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.665 06:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:23.665 06:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:23.666 06:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:25.565 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:25.565 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:25.565 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.565 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:25.565 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.565 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:25.565 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:25.565 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:25.824 [ 0]:0x2 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd91479696e54ac9b8892106051392ad 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd91479696e54ac9b8892106051392ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:25.824 06:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:26.082 [ 0]:0x1 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32cce96bced945cb9ae41d9438950547 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32cce96bced945cb9ae41d9438950547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:26.082 [ 1]:0x2 00:13:26.082 06:59:34 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:26.082 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.341 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd91479696e54ac9b8892106051392ad 00:13:26.341 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd91479696e54ac9b8892106051392ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.341 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:26.599 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:26.599 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:26.599 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:26.599 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:26.600 [ 0]:0x2 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd91479696e54ac9b8892106051392ad 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd91479696e54ac9b8892106051392ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
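The RPC pair traced just above is the whole masking mechanism: once a namespace has been added with --no-auto-visible, nvmf_ns_add_host grants a single host NQN access to it and nvmf_ns_remove_host revokes that access, and the test observes the change through the NGUID reported by Identify Namespace (the real NGUID while visible, all zeros once masked). A condensed sketch with this run's names:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBSYS=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

$RPC nvmf_ns_add_host "$SUBSYS" 1 "$HOST"              # namespace 1 becomes visible to $HOST
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # -> 32cce96bced945cb9ae41d9438950547

$RPC nvmf_ns_remove_host "$SUBSYS" 1 "$HOST"           # hide it from $HOST again
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # -> 00000000000000000000000000000000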
00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.600 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:26.858 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:26.858 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f35b60c5-3603-40b6-82b5-5403306b7c40 -a 10.0.0.2 -s 4420 -i 4 00:13:27.116 06:59:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:27.116 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.116 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.116 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:27.116 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:27.116 06:59:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:29.017 06:59:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:29.017 06:59:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:29.017 06:59:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.017 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:29.017 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.017 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:29.017 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:29.017 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:29.017 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:29.017 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:29.018 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:29.018 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:29.018 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:29.018 [ 0]:0x1 00:13:29.018 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:29.018 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=32cce96bced945cb9ae41d9438950547 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 32cce96bced945cb9ae41d9438950547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:13:29.276 [ 1]:0x2 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd91479696e54ac9b8892106051392ad 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd91479696e54ac9b8892106051392ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:29.276 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:29.534 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:29.534 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:29.535 [ 0]:0x2 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd91479696e54ac9b8892106051392ad 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd91479696e54ac9b8892106051392ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:29.535 06:59:37 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:29.535 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:29.793 [2024-07-13 06:59:37.857194] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:29.793 2024/07/13 06:59:37 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:13:29.793 request: 00:13:29.793 { 00:13:29.793 "method": "nvmf_ns_remove_host", 00:13:29.793 "params": { 00:13:29.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.793 "nsid": 2, 00:13:29.793 "host": "nqn.2016-06.io.spdk:host1" 00:13:29.793 } 00:13:29.793 } 00:13:29.793 Got JSON-RPC error response 00:13:29.793 GoRPCClient: error on JSON-RPC call 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:30.052 06:59:37 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:30.052 [ 0]:0x2 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cd91479696e54ac9b8892106051392ad 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cd91479696e54ac9b8892106051392ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:30.052 06:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=88322 00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 88322 /var/tmp/host.sock 00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 88322 ']' 00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
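What starts here is a second SPDK application acting as the NVMe-oF host. It listens on its own RPC socket, and the bdev_nvme_attach_controller calls traced further below are sent to that socket with -s. A sketch using this run's paths and core mask (process handling is simplified; the harness records the pid as $hostpid and kills it on exit):

# Host-side SPDK app, kept separate from the target by its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostpid=$!
# (the waitforlisten helper polls the socket before any RPC is issued)

# Host-side RPCs then select that socket explicitly:
HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
$HOSTRPC bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0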
00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.052 06:59:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:30.052 [2024-07-13 06:59:38.100223] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:30.052 [2024-07-13 06:59:38.100302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88322 ] 00:13:30.310 [2024-07-13 06:59:38.240637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.310 [2024-07-13 06:59:38.342153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.245 06:59:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.245 06:59:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:31.245 06:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.245 06:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.809 06:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9eb96d38-1395-43de-824e-b6c99e9994ad 00:13:31.809 06:59:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:31.809 06:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9EB96D38139543DE824EB6C99E9994AD -i 00:13:31.809 06:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b171d9d8-594d-445e-bba6-b8ae645ebc00 00:13:31.809 06:59:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:31.809 06:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B171D9D8594D445EBBA6B8AE645EBC00 -i 00:13:32.066 06:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:32.324 06:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:32.583 06:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:32.583 06:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:32.841 nvme0n1 00:13:32.841 06:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:32.841 06:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:33.099 nvme1n2 00:13:33.357 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:33.357 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:33.358 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:33.358 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:33.358 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:33.616 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:33.616 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:33.616 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:33.616 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:33.616 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9eb96d38-1395-43de-824e-b6c99e9994ad == \9\e\b\9\6\d\3\8\-\1\3\9\5\-\4\3\d\e\-\8\2\4\e\-\b\6\c\9\9\e\9\9\9\4\a\d ]] 00:13:33.616 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:33.616 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:33.616 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ b171d9d8-594d-445e-bba6-b8ae645ebc00 == \b\1\7\1\d\9\d\8\-\5\9\4\d\-\4\4\5\e\-\b\b\a\6\-\b\8\a\e\6\4\5\e\b\c\0\0 ]] 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 88322 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 88322 ']' 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 88322 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88322 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:33.874 killing process with pid 88322 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88322' 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 88322 00:13:33.874 06:59:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 88322 00:13:34.440 06:59:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:34.699 06:59:42 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.699 rmmod nvme_tcp 00:13:34.699 rmmod nvme_fabrics 00:13:34.699 rmmod nvme_keyring 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 87945 ']' 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 87945 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 87945 ']' 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 87945 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87945 00:13:34.699 killing process with pid 87945 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87945' 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 87945 00:13:34.699 06:59:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 87945 00:13:34.965 06:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.965 06:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.965 06:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.965 06:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.965 06:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.965 06:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.965 06:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.965 06:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.223 06:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:35.224 00:13:35.224 real 0m17.835s 00:13:35.224 user 0m27.715s 00:13:35.224 sys 0m2.925s 00:13:35.224 06:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.224 06:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:35.224 ************************************ 00:13:35.224 END TEST nvmf_ns_masking 00:13:35.224 ************************************ 00:13:35.224 06:59:43 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:13:35.224 06:59:43 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:13:35.224 06:59:43 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:13:35.224 06:59:43 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:35.224 06:59:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:35.224 06:59:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.224 06:59:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.224 ************************************ 00:13:35.224 START TEST nvmf_host_management 00:13:35.224 ************************************ 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:35.224 * Looking for test storage... 00:13:35.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:35.224 Cannot find device "nvmf_tgt_br" 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.224 Cannot find device "nvmf_tgt_br2" 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:13:35.224 06:59:43 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:35.224 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:35.224 Cannot find device "nvmf_tgt_br" 00:13:35.481 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:35.482 Cannot find device "nvmf_tgt_br2" 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:35.482 
06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:35.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:13:35.482 00:13:35.482 --- 10.0.0.2 ping statistics --- 00:13:35.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.482 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:35.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:13:35.482 00:13:35.482 --- 10.0.0.3 ping statistics --- 00:13:35.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.482 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:35.482 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:35.740 00:13:35.740 --- 10.0.0.1 ping statistics --- 00:13:35.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.740 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=88685 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 88685 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 88685 ']' 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.740 06:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:35.740 [2024-07-13 06:59:43.651442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:35.740 [2024-07-13 06:59:43.651524] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.740 [2024-07-13 06:59:43.794433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.998 [2024-07-13 06:59:43.892048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.998 [2024-07-13 06:59:43.892102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.998 [2024-07-13 06:59:43.892116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.998 [2024-07-13 06:59:43.892126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.998 [2024-07-13 06:59:43.892136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
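At this point nvmf_veth_init has finished building the test network and nvmfappstart has launched nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (hence the ip netns exec prefix on its command line). Condensed from the trace above, the same topology can be recreated by hand with roughly the following commands (run as root; the interface names and addresses are the ones the helper itself uses):

    # initiator-side veth pair stays in the root namespace; target-side pairs move into the netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the three root-namespace ends together and open TCP/4420 for the initiator
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond ping round trips traced earlier confirm both directions across the bridge before any NVMe/TCP traffic is attempted; the target launch log continues below with the reactor start notices.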
00:13:35.998 [2024-07-13 06:59:43.892489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.998 [2024-07-13 06:59:43.892858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.998 [2024-07-13 06:59:43.893012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:35.998 [2024-07-13 06:59:43.893112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:36.939 [2024-07-13 06:59:44.690804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:36.939 Malloc0 00:13:36.939 [2024-07-13 06:59:44.764972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=88757 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 88757 /var/tmp/bdevperf.sock 00:13:36.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
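The subsystem itself is provisioned through the rpcs.txt batch that the trace only shows being written (host_management.sh@22 rm -rf .../rpcs.txt, @23 cat) and replayed via rpc_cmd at @30, so the exact RPC list is not echoed here. Judging from the Malloc0 bdev and the listener on 10.0.0.2 port 4420 that the target reports, an equivalent manual setup with scripts/rpc.py would look roughly like this (the arguments are an illustration reconstructed from the surrounding trace, not the literal contents of rpcs.txt):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                       # traced explicitly above
    $rpc bdev_malloc_create 64 512 -b Malloc0                          # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host NQN used by the bdevperf config

The bdevperf job whose JSON parameters are rendered just below then attaches to this subsystem as nqn.2016-06.io.spdk:host0 over 10.0.0.2:4420.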
00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 88757 ']' 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:36.939 { 00:13:36.939 "params": { 00:13:36.939 "name": "Nvme$subsystem", 00:13:36.939 "trtype": "$TEST_TRANSPORT", 00:13:36.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:36.939 "adrfam": "ipv4", 00:13:36.939 "trsvcid": "$NVMF_PORT", 00:13:36.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:36.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:36.939 "hdgst": ${hdgst:-false}, 00:13:36.939 "ddgst": ${ddgst:-false} 00:13:36.939 }, 00:13:36.939 "method": "bdev_nvme_attach_controller" 00:13:36.939 } 00:13:36.939 EOF 00:13:36.939 )") 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:36.939 06:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:36.939 "params": { 00:13:36.939 "name": "Nvme0", 00:13:36.939 "trtype": "tcp", 00:13:36.939 "traddr": "10.0.0.2", 00:13:36.939 "adrfam": "ipv4", 00:13:36.939 "trsvcid": "4420", 00:13:36.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:36.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:36.939 "hdgst": false, 00:13:36.939 "ddgst": false 00:13:36.939 }, 00:13:36.939 "method": "bdev_nvme_attach_controller" 00:13:36.939 }' 00:13:36.939 [2024-07-13 06:59:44.873705] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:36.939 [2024-07-13 06:59:44.873788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88757 ] 00:13:37.197 [2024-07-13 06:59:45.016803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.197 [2024-07-13 06:59:45.145292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.455 Running I/O for 10 seconds... 
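While that 10-second job runs, the waitforio helper traced at the start of the next block decides when enough traffic has flowed to proceed: it counts down from 10 attempts, asks bdevperf's private RPC socket for Nvme0n1's read counter, and stops once at least 100 reads are seen (the run below reports read_io_count=771 on the first check). In shell terms the check is essentially the loop sketched here (the retry delay is an assumption; it is not visible in this excerpt):

    # stand-in for the test's rpc_cmd helper; -s points the RPC client at bdevperf's socket
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 10; i != 0; i--)); do
        reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                jq -r '.bdevs[0].num_read_ops')
        [[ $reads -ge 100 ]] && break   # enough I/O observed, move on to the fault injection
        sleep 1                         # hypothetical pacing between polls
    done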
00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.024 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:38.024 [2024-07-13 06:59:45.907007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.024 [2024-07-13 06:59:45.907081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.024 [2024-07-13 06:59:45.907095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.024 [2024-07-13 06:59:45.907105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.024 [2024-07-13 06:59:45.907115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.024 [2024-07-13 06:59:45.907124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.024 [2024-07-13 06:59:45.907135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.024 [2024-07-13 06:59:45.907143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.024 [2024-07-13 06:59:45.907153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1cb0 is same with the state(5) to be set 00:13:38.024 [2024-07-13 06:59:45.910444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.024 [2024-07-13 06:59:45.910483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.024 [2024-07-13 06:59:45.910513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.024 [2024-07-13 06:59:45.910524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:38.025 [2024-07-13 06:59:45.910662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 
[2024-07-13 06:59:45.910877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.910981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.910992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 
06:59:45.911079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:1 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.025 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:38.025 [2024-07-13 06:59:45.911276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.025 [2024-07-13 06:59:45.911400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.025 [2024-07-13 06:59:45.911410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.026 [2024-07-13 06:59:45.911420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 
06:59:45.911459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:38.026 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:38.026 [2024-07-13 06:59:45.911808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.026 [2024-07-13 06:59:45.911818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d423f0 is same with the state(5) to be set 00:13:38.026 [2024-07-13 06:59:45.911899] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d423f0 was disconnected and freed. reset controller. 
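The burst of ABORTED - SQ DELETION completions above is the intended outcome of host_management.sh@84: with bdevperf mid-run, the test revokes its host NQN from the subsystem, the target tears down the TCP qpair, and every in-flight I/O is aborted before the initiator resets the controller. The @85 re-add, interleaved with the abort log above, is what allows that reset to succeed. Against a live target the same fault can be injected by hand with two RPCs (a sketch using scripts/rpc.py; the test issues the same calls through rpc_cmd):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # @84: drops the live connection, outstanding I/O aborts
    $rpc nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # @85: permits the reconnect
    sleep 1                                                                                # @87: give the initiator time to reset

The "Resetting controller successful" notice in the block that follows confirms the reconnect before the test kills the first bdevperf instance and starts a clean verification run.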
00:13:38.026 [2024-07-13 06:59:45.913009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:38.026 task offset: 114560 on job bdev=Nvme0n1 fails 00:13:38.026 00:13:38.026 Latency(us) 00:13:38.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.026 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:38.026 Job: Nvme0n1 ended in about 0.55 seconds with error 00:13:38.026 Verification LBA range: start 0x0 length 0x400 00:13:38.026 Nvme0n1 : 0.55 1502.46 93.90 115.57 0.00 38503.16 5898.24 35508.60 00:13:38.026 =================================================================================================================== 00:13:38.026 Total : 1502.46 93.90 115.57 0.00 38503.16 5898.24 35508.60 00:13:38.026 [2024-07-13 06:59:45.914801] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:38.026 [2024-07-13 06:59:45.914827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1cb0 (9): Bad file descriptor 00:13:38.026 06:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.026 06:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:38.026 [2024-07-13 06:59:45.927785] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 88757 00:13:38.961 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (88757) - No such process 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:38.961 { 00:13:38.961 "params": { 00:13:38.961 "name": "Nvme$subsystem", 00:13:38.961 "trtype": "$TEST_TRANSPORT", 00:13:38.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.961 "adrfam": "ipv4", 00:13:38.961 "trsvcid": "$NVMF_PORT", 00:13:38.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.961 "hdgst": ${hdgst:-false}, 00:13:38.961 "ddgst": ${ddgst:-false} 00:13:38.961 }, 00:13:38.961 "method": "bdev_nvme_attach_controller" 00:13:38.961 } 00:13:38.961 EOF 00:13:38.961 )") 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:38.961 06:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:38.961 "params": { 00:13:38.961 "name": "Nvme0", 00:13:38.961 "trtype": "tcp", 00:13:38.961 "traddr": "10.0.0.2", 00:13:38.961 "adrfam": "ipv4", 00:13:38.961 "trsvcid": "4420", 00:13:38.961 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:38.961 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:38.961 "hdgst": false, 00:13:38.961 "ddgst": false 00:13:38.961 }, 00:13:38.961 "method": "bdev_nvme_attach_controller" 00:13:38.961 }' 00:13:38.961 [2024-07-13 06:59:46.975305] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:38.961 [2024-07-13 06:59:46.975406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88807 ] 00:13:39.220 [2024-07-13 06:59:47.110423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.220 [2024-07-13 06:59:47.229056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.479 Running I/O for 1 seconds... 00:13:40.414 00:13:40.414 Latency(us) 00:13:40.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.414 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:40.414 Verification LBA range: start 0x0 length 0x400 00:13:40.414 Nvme0n1 : 1.03 1611.61 100.73 0.00 0.00 38978.37 6106.76 36461.85 00:13:40.414 =================================================================================================================== 00:13:40.414 Total : 1611.61 100.73 0.00 0.00 38978.37 6106.76 36461.85 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.981 rmmod nvme_tcp 00:13:40.981 rmmod nvme_fabrics 00:13:40.981 rmmod nvme_keyring 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 88685 ']' 00:13:40.981 06:59:48 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 88685 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 88685 ']' 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 88685 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88685 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88685' 00:13:40.981 killing process with pid 88685 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 88685 00:13:40.981 06:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 88685 00:13:41.240 [2024-07-13 06:59:49.259594] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:41.240 06:59:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:41.240 06:59:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:41.240 06:59:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:41.240 06:59:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.240 06:59:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:41.240 06:59:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.240 06:59:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.240 06:59:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.499 06:59:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:41.499 06:59:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:41.499 00:13:41.499 real 0m6.201s 00:13:41.499 user 0m24.206s 00:13:41.499 sys 0m1.522s 00:13:41.499 06:59:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.499 ************************************ 00:13:41.499 END TEST nvmf_host_management 00:13:41.499 ************************************ 00:13:41.499 06:59:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:41.499 06:59:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:41.499 06:59:49 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:41.499 06:59:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:41.499 06:59:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.499 06:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.499 ************************************ 00:13:41.499 START TEST nvmf_lvol 00:13:41.499 ************************************ 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- 
# /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:41.499 * Looking for test storage... 00:13:41.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
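Up to this point nvmf_lvol.sh has only pulled in nvmf/common.sh and set its knobs (ports, host NQN, malloc and lvol sizes, the rpc.py path). Every target test in this log follows the same skeleton; the sketch below is a hedged reconstruction built only from helper names and values visible in the trace, not the actual script.

    # Sketch only: nvmftestinit, nvmfappstart and nvmftestfini are helpers from
    # nvmf/common.sh and autotest_common.sh; they are not reimplemented here.
    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MALLOC_BDEV_SIZE=64
    MALLOC_BLOCK_SIZE=512

    nvmftestinit                 # veth/netns/bridge setup, sketched further below
    nvmfappstart -m 0x7          # nvmf_tgt started inside nvmf_tgt_ns_spdk
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

    # ... per-test RPC configuration and I/O, condensed in the sketches below ...

    trap - SIGINT SIGTERM EXIT
    nvmftestfini                 # kill nvmf_tgt, modprobe -r nvme-tcp, drop the namespace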
00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:41.499 Cannot find device "nvmf_tgt_br" 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:13:41.499 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:41.499 Cannot find device "nvmf_tgt_br2" 00:13:41.500 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:13:41.500 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:41.500 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:41.500 Cannot find device "nvmf_tgt_br" 00:13:41.500 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:13:41.500 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:41.500 Cannot find device "nvmf_tgt_br2" 00:13:41.500 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:13:41.500 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:41.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:41.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:41.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:41.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:13:41.758 00:13:41.758 --- 10.0.0.2 ping statistics --- 00:13:41.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.758 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:41.758 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:41.758 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:13:41.758 00:13:41.758 --- 10.0.0.3 ping statistics --- 00:13:41.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.758 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:41.758 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:42.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:42.017 00:13:42.017 --- 10.0.0.1 ping statistics --- 00:13:42.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.017 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=89021 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 89021 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 89021 ']' 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.017 06:59:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:42.017 [2024-07-13 06:59:49.923848] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
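The nvmf_veth_init sequence that just ran builds the topology every TCP test here relies on: one initiator veth (nvmf_init_if, 10.0.0.1/24) in the default namespace, two target veths (nvmf_tgt_if 10.0.0.2/24 and nvmf_tgt_if2 10.0.0.3/24) inside the nvmf_tgt_ns_spdk namespace, all tied together by the nvmf_br bridge, with TCP port 4420 opened in iptables. Reduced to its essential commands (a root-only sketch of what the helper does, not the helper itself):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> first target interface
    ping -c 1 10.0.0.3                                   # initiator -> second target interface
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator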
00:13:42.017 [2024-07-13 06:59:49.924176] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.017 [2024-07-13 06:59:50.067951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:42.275 [2024-07-13 06:59:50.190434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.275 [2024-07-13 06:59:50.190797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.275 [2024-07-13 06:59:50.190889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.275 [2024-07-13 06:59:50.190994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.275 [2024-07-13 06:59:50.191093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.275 [2024-07-13 06:59:50.191376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.275 [2024-07-13 06:59:50.191445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.275 [2024-07-13 06:59:50.191446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.209 06:59:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.209 06:59:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:43.209 06:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.209 06:59:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.209 06:59:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:43.209 06:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.209 06:59:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:43.209 [2024-07-13 06:59:51.167208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.209 06:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:43.466 06:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:43.466 06:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:44.033 06:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:44.033 06:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:44.033 06:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:44.291 06:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ce40e479-d911-4b82-84f3-0dc9007255be 00:13:44.291 06:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ce40e479-d911-4b82-84f3-0dc9007255be lvol 20 00:13:44.549 06:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3610b4d0-7a40-487d-89ff-ac5fd9a4f5d4 00:13:44.549 06:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:44.807 06:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3610b4d0-7a40-487d-89ff-ac5fd9a4f5d4 00:13:45.064 06:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:45.322 [2024-07-13 06:59:53.263148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.322 06:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.581 06:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:45.581 06:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=89169 00:13:45.581 06:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:46.516 06:59:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 3610b4d0-7a40-487d-89ff-ac5fd9a4f5d4 MY_SNAPSHOT 00:13:47.083 06:59:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=02a07997-7832-4cca-be80-c0fd1161c673 00:13:47.084 06:59:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 3610b4d0-7a40-487d-89ff-ac5fd9a4f5d4 30 00:13:47.342 06:59:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 02a07997-7832-4cca-be80-c0fd1161c673 MY_CLONE 00:13:47.601 06:59:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f7b793e1-5c21-4e08-a2fa-bd8efe754ee7 00:13:47.601 06:59:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f7b793e1-5c21-4e08-a2fa-bd8efe754ee7 00:13:48.536 06:59:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 89169 00:13:56.698 Initializing NVMe Controllers 00:13:56.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:56.698 Controller IO queue size 128, less than required. 00:13:56.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:56.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:56.698 Initialization complete. Launching workers. 
00:13:56.698 ======================================================== 00:13:56.698 Latency(us) 00:13:56.698 Device Information : IOPS MiB/s Average min max 00:13:56.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7042.30 27.51 18186.37 2535.03 131341.11 00:13:56.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7676.40 29.99 16694.36 3179.48 93362.64 00:13:56.698 ======================================================== 00:13:56.698 Total : 14718.70 57.49 17408.23 2535.03 131341.11 00:13:56.698 00:13:56.698 07:00:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3610b4d0-7a40-487d-89ff-ac5fd9a4f5d4 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce40e479-d911-4b82-84f3-0dc9007255be 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.698 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.698 rmmod nvme_tcp 00:13:56.698 rmmod nvme_fabrics 00:13:56.956 rmmod nvme_keyring 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 89021 ']' 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 89021 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 89021 ']' 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 89021 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89021 00:13:56.956 killing process with pid 89021 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89021' 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 89021 00:13:56.956 07:00:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 89021 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
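Stripped of the xtrace noise, the nvmf_lvol run above reduces to one RPC sequence: two 64 MiB malloc bdevs striped into raid0, an lvstore and a 20 MiB lvol on top, and an NVMe-oF subsystem exporting that lvol on 10.0.0.2:4420, after which spdk_nvme_perf drives random writes while the lvol is snapshotted, resized, cloned and inflated. A condensed sketch of that sequence (the UUIDs differ on every run; comments show the values captured in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                       # -> Malloc0
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # ce40e479-d911-4b82-84f3-0dc9007255be
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 3610b4d0-7a40-487d-89ff-ac5fd9a4f5d4

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &

    # While perf is running, exercise the lvol metadata paths:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # 02a07997-7832-4cca-be80-c0fd1161c673
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # f7b793e1-5c21-4e08-a2fa-bd8efe754ee7
    $rpc bdev_lvol_inflate "$clone"
    wait                                                 # let the 10 s perf run finish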
00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:57.214 ************************************ 00:13:57.214 END TEST nvmf_lvol 00:13:57.214 ************************************ 00:13:57.214 00:13:57.214 real 0m15.800s 00:13:57.214 user 1m5.939s 00:13:57.214 sys 0m3.825s 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 07:00:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:57.214 07:00:05 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:57.214 07:00:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:57.214 07:00:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.214 07:00:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.214 ************************************ 00:13:57.214 START TEST nvmf_lvs_grow 00:13:57.214 ************************************ 00:13:57.214 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:57.471 * Looking for test storage... 
00:13:57.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.471 07:00:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:57.472 Cannot find device "nvmf_tgt_br" 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:57.472 Cannot find device "nvmf_tgt_br2" 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:57.472 Cannot find device "nvmf_tgt_br" 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:57.472 Cannot find device "nvmf_tgt_br2" 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:57.472 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:57.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:57.472 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:57.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:13:57.730 00:13:57.730 --- 10.0.0.2 ping statistics --- 00:13:57.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.730 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:57.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:57.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:13:57.730 00:13:57.730 --- 10.0.0.3 ping statistics --- 00:13:57.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.730 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:57.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:13:57.730 00:13:57.730 --- 10.0.0.1 ping statistics --- 00:13:57.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.730 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=89535 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 89535 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 89535 ']' 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
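nvmfappstart, which just completed above, hides a small amount of machinery: it launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace, remembers its pid, and blocks in waitforlisten until the RPC socket answers. A hedged approximation of that start-up handshake (waitforlisten itself lives in autotest_common.sh; the polling loop below is only an illustration, using rpc_get_methods as the readiness probe):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Simplified stand-in for waitforlisten: poll the RPC socket until it responds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1     # give up if the target process died
        sleep 0.5
    done

    # First RPC of every run in this log: create the TCP transport.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192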
00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.730 07:00:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:57.730 [2024-07-13 07:00:05.781499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:57.730 [2024-07-13 07:00:05.781635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.988 [2024-07-13 07:00:05.923220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.988 [2024-07-13 07:00:06.032288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.988 [2024-07-13 07:00:06.032361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.988 [2024-07-13 07:00:06.032389] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.988 [2024-07-13 07:00:06.032397] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.988 [2024-07-13 07:00:06.032404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.988 [2024-07-13 07:00:06.032431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.922 07:00:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.922 07:00:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:58.922 07:00:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.922 07:00:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:58.922 07:00:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:58.922 07:00:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.922 07:00:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:59.179 [2024-07-13 07:00:07.026928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:59.179 ************************************ 00:13:59.179 START TEST lvs_grow_clean 00:13:59.179 ************************************ 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local 
bdevperf_pid run_test_pid 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:59.179 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:59.180 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:59.180 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:59.180 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:59.437 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:59.437 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:59.694 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=79819157-19f9-4c81-ba9b-9747410b7c75 00:13:59.694 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:13:59.694 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:59.953 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:59.953 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:59.953 07:00:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79819157-19f9-4c81-ba9b-9747410b7c75 lvol 150 00:14:00.213 07:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=408ed646-45c9-4463-b5bb-a2fd0a7dbbfe 00:14:00.213 07:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:00.213 07:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:00.213 [2024-07-13 07:00:08.261132] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:00.213 [2024-07-13 07:00:08.261233] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:00.213 true 00:14:00.213 07:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:00.213 07:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:00.472 07:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:00.473 07:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:00.730 07:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 408ed646-45c9-4463-b5bb-a2fd0a7dbbfe 00:14:00.989 07:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:01.248 [2024-07-13 07:00:09.169649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.248 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89692 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89692 /var/tmp/bdevperf.sock 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 89692 ']' 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.507 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:01.507 [2024-07-13 07:00:09.454180] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:14:01.507 [2024-07-13 07:00:09.454271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89692 ] 00:14:01.765 [2024-07-13 07:00:09.583895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.765 [2024-07-13 07:00:09.674739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.765 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.765 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:01.765 07:00:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:02.023 Nvme0n1 00:14:02.023 07:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:02.282 [ 00:14:02.282 { 00:14:02.282 "aliases": [ 00:14:02.282 "408ed646-45c9-4463-b5bb-a2fd0a7dbbfe" 00:14:02.282 ], 00:14:02.282 "assigned_rate_limits": { 00:14:02.282 "r_mbytes_per_sec": 0, 00:14:02.282 "rw_ios_per_sec": 0, 00:14:02.282 "rw_mbytes_per_sec": 0, 00:14:02.282 "w_mbytes_per_sec": 0 00:14:02.282 }, 00:14:02.282 "block_size": 4096, 00:14:02.282 "claimed": false, 00:14:02.282 "driver_specific": { 00:14:02.282 "mp_policy": "active_passive", 00:14:02.282 "nvme": [ 00:14:02.282 { 00:14:02.282 "ctrlr_data": { 00:14:02.282 "ana_reporting": false, 00:14:02.282 "cntlid": 1, 00:14:02.282 "firmware_revision": "24.09", 00:14:02.282 "model_number": "SPDK bdev Controller", 00:14:02.282 "multi_ctrlr": true, 00:14:02.282 "oacs": { 00:14:02.282 "firmware": 0, 00:14:02.282 "format": 0, 00:14:02.282 "ns_manage": 0, 00:14:02.282 "security": 0 00:14:02.282 }, 00:14:02.282 "serial_number": "SPDK0", 00:14:02.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:02.282 "vendor_id": "0x8086" 00:14:02.282 }, 00:14:02.282 "ns_data": { 00:14:02.282 "can_share": true, 00:14:02.282 "id": 1 00:14:02.282 }, 00:14:02.282 "trid": { 00:14:02.282 "adrfam": "IPv4", 00:14:02.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:02.282 "traddr": "10.0.0.2", 00:14:02.282 "trsvcid": "4420", 00:14:02.282 "trtype": "TCP" 00:14:02.282 }, 00:14:02.282 "vs": { 00:14:02.282 "nvme_version": "1.3" 00:14:02.282 } 00:14:02.282 } 00:14:02.282 ] 00:14:02.282 }, 00:14:02.282 "memory_domains": [ 00:14:02.282 { 00:14:02.282 "dma_device_id": "system", 00:14:02.282 "dma_device_type": 1 00:14:02.282 } 00:14:02.282 ], 00:14:02.282 "name": "Nvme0n1", 00:14:02.282 "num_blocks": 38912, 00:14:02.282 "product_name": "NVMe disk", 00:14:02.282 "supported_io_types": { 00:14:02.282 "abort": true, 00:14:02.282 "compare": true, 00:14:02.282 "compare_and_write": true, 00:14:02.282 "copy": true, 00:14:02.282 "flush": true, 00:14:02.282 "get_zone_info": false, 00:14:02.282 "nvme_admin": true, 00:14:02.282 "nvme_io": true, 00:14:02.282 "nvme_io_md": false, 00:14:02.282 "nvme_iov_md": false, 00:14:02.282 "read": true, 00:14:02.282 "reset": true, 00:14:02.282 "seek_data": false, 00:14:02.282 "seek_hole": false, 00:14:02.282 "unmap": true, 00:14:02.282 "write": true, 00:14:02.282 "write_zeroes": true, 00:14:02.282 "zcopy": false, 00:14:02.282 
"zone_append": false, 00:14:02.282 "zone_management": false 00:14:02.282 }, 00:14:02.282 "uuid": "408ed646-45c9-4463-b5bb-a2fd0a7dbbfe", 00:14:02.282 "zoned": false 00:14:02.282 } 00:14:02.282 ] 00:14:02.282 07:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:02.282 07:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89725 00:14:02.282 07:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:02.543 Running I/O for 10 seconds... 00:14:03.482 Latency(us) 00:14:03.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.482 Nvme0n1 : 1.00 9926.00 38.77 0.00 0.00 0.00 0.00 0.00 00:14:03.482 =================================================================================================================== 00:14:03.482 Total : 9926.00 38.77 0.00 0.00 0.00 0.00 0.00 00:14:03.482 00:14:04.417 07:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:04.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.418 Nvme0n1 : 2.00 9928.50 38.78 0.00 0.00 0.00 0.00 0.00 00:14:04.418 =================================================================================================================== 00:14:04.418 Total : 9928.50 38.78 0.00 0.00 0.00 0.00 0.00 00:14:04.418 00:14:04.676 true 00:14:04.676 07:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:04.676 07:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:04.934 07:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:04.934 07:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:04.934 07:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 89725 00:14:05.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.498 Nvme0n1 : 3.00 9906.00 38.70 0.00 0.00 0.00 0.00 0.00 00:14:05.498 =================================================================================================================== 00:14:05.498 Total : 9906.00 38.70 0.00 0.00 0.00 0.00 0.00 00:14:05.498 00:14:06.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.430 Nvme0n1 : 4.00 9868.25 38.55 0.00 0.00 0.00 0.00 0.00 00:14:06.430 =================================================================================================================== 00:14:06.430 Total : 9868.25 38.55 0.00 0.00 0.00 0.00 0.00 00:14:06.430 00:14:07.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.365 Nvme0n1 : 5.00 9820.00 38.36 0.00 0.00 0.00 0.00 0.00 00:14:07.365 =================================================================================================================== 00:14:07.365 Total : 9820.00 38.36 0.00 0.00 0.00 0.00 0.00 00:14:07.365 00:14:08.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.738 
Nvme0n1 : 6.00 9783.67 38.22 0.00 0.00 0.00 0.00 0.00 00:14:08.738 =================================================================================================================== 00:14:08.738 Total : 9783.67 38.22 0.00 0.00 0.00 0.00 0.00 00:14:08.738 00:14:09.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.673 Nvme0n1 : 7.00 9696.00 37.88 0.00 0.00 0.00 0.00 0.00 00:14:09.673 =================================================================================================================== 00:14:09.673 Total : 9696.00 37.88 0.00 0.00 0.00 0.00 0.00 00:14:09.673 00:14:10.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.609 Nvme0n1 : 8.00 9618.12 37.57 0.00 0.00 0.00 0.00 0.00 00:14:10.609 =================================================================================================================== 00:14:10.609 Total : 9618.12 37.57 0.00 0.00 0.00 0.00 0.00 00:14:10.609 00:14:11.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.562 Nvme0n1 : 9.00 9565.78 37.37 0.00 0.00 0.00 0.00 0.00 00:14:11.562 =================================================================================================================== 00:14:11.562 Total : 9565.78 37.37 0.00 0.00 0.00 0.00 0.00 00:14:11.562 00:14:12.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.496 Nvme0n1 : 10.00 9502.10 37.12 0.00 0.00 0.00 0.00 0.00 00:14:12.496 =================================================================================================================== 00:14:12.496 Total : 9502.10 37.12 0.00 0.00 0.00 0.00 0.00 00:14:12.496 00:14:12.496 00:14:12.496 Latency(us) 00:14:12.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.496 Nvme0n1 : 10.01 9503.27 37.12 0.00 0.00 13459.88 6255.71 30146.56 00:14:12.496 =================================================================================================================== 00:14:12.496 Total : 9503.27 37.12 0.00 0.00 13459.88 6255.71 30146.56 00:14:12.496 0 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89692 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 89692 ']' 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 89692 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89692 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:12.496 killing process with pid 89692 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89692' 00:14:12.496 Received shutdown signal, test time was about 10.000000 seconds 00:14:12.496 00:14:12.496 Latency(us) 00:14:12.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.496 
=================================================================================================================== 00:14:12.496 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 89692 00:14:12.496 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 89692 00:14:12.755 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:13.013 07:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:13.270 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:13.270 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:13.528 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:13.528 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:13.528 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:13.786 [2024-07-13 07:00:21.634224] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:13.786 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:14.045 2024/07/13 07:00:21 error on 
JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:79819157-19f9-4c81-ba9b-9747410b7c75], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:14.045 request: 00:14:14.045 { 00:14:14.045 "method": "bdev_lvol_get_lvstores", 00:14:14.045 "params": { 00:14:14.045 "uuid": "79819157-19f9-4c81-ba9b-9747410b7c75" 00:14:14.045 } 00:14:14.045 } 00:14:14.045 Got JSON-RPC error response 00:14:14.045 GoRPCClient: error on JSON-RPC call 00:14:14.045 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:14.045 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:14.045 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:14.045 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:14.045 07:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:14.045 aio_bdev 00:14:14.304 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 408ed646-45c9-4463-b5bb-a2fd0a7dbbfe 00:14:14.304 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=408ed646-45c9-4463-b5bb-a2fd0a7dbbfe 00:14:14.304 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:14.304 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:14.304 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:14.304 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:14.304 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:14.562 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 408ed646-45c9-4463-b5bb-a2fd0a7dbbfe -t 2000 00:14:14.562 [ 00:14:14.562 { 00:14:14.562 "aliases": [ 00:14:14.562 "lvs/lvol" 00:14:14.562 ], 00:14:14.562 "assigned_rate_limits": { 00:14:14.562 "r_mbytes_per_sec": 0, 00:14:14.562 "rw_ios_per_sec": 0, 00:14:14.562 "rw_mbytes_per_sec": 0, 00:14:14.562 "w_mbytes_per_sec": 0 00:14:14.562 }, 00:14:14.562 "block_size": 4096, 00:14:14.562 "claimed": false, 00:14:14.562 "driver_specific": { 00:14:14.562 "lvol": { 00:14:14.562 "base_bdev": "aio_bdev", 00:14:14.562 "clone": false, 00:14:14.562 "esnap_clone": false, 00:14:14.562 "lvol_store_uuid": "79819157-19f9-4c81-ba9b-9747410b7c75", 00:14:14.562 "num_allocated_clusters": 38, 00:14:14.562 "snapshot": false, 00:14:14.562 "thin_provision": false 00:14:14.562 } 00:14:14.562 }, 00:14:14.562 "name": "408ed646-45c9-4463-b5bb-a2fd0a7dbbfe", 00:14:14.562 "num_blocks": 38912, 00:14:14.562 "product_name": "Logical Volume", 00:14:14.562 "supported_io_types": { 00:14:14.562 "abort": false, 00:14:14.562 "compare": false, 00:14:14.562 "compare_and_write": false, 00:14:14.562 "copy": false, 00:14:14.562 "flush": false, 00:14:14.562 "get_zone_info": false, 00:14:14.562 "nvme_admin": false, 00:14:14.562 "nvme_io": false, 00:14:14.563 "nvme_io_md": false, 00:14:14.563 "nvme_iov_md": false, 00:14:14.563 "read": true, 
00:14:14.563 "reset": true, 00:14:14.563 "seek_data": true, 00:14:14.563 "seek_hole": true, 00:14:14.563 "unmap": true, 00:14:14.563 "write": true, 00:14:14.563 "write_zeroes": true, 00:14:14.563 "zcopy": false, 00:14:14.563 "zone_append": false, 00:14:14.563 "zone_management": false 00:14:14.563 }, 00:14:14.563 "uuid": "408ed646-45c9-4463-b5bb-a2fd0a7dbbfe", 00:14:14.563 "zoned": false 00:14:14.563 } 00:14:14.563 ] 00:14:14.563 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:14.563 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:14.563 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:14.821 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:14.821 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:14.821 07:00:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:15.078 07:00:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:15.078 07:00:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 408ed646-45c9-4463-b5bb-a2fd0a7dbbfe 00:14:15.336 07:00:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 79819157-19f9-4c81-ba9b-9747410b7c75 00:14:15.998 07:00:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:15.998 07:00:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:16.257 ************************************ 00:14:16.257 END TEST lvs_grow_clean 00:14:16.257 ************************************ 00:14:16.257 00:14:16.257 real 0m17.250s 00:14:16.257 user 0m16.123s 00:14:16.257 sys 0m2.236s 00:14:16.257 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:16.257 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:16.524 ************************************ 00:14:16.524 START TEST lvs_grow_dirty 00:14:16.524 ************************************ 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 
-- # local data_clusters free_clusters 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:16.524 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:16.525 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:16.525 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:16.525 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:16.788 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:16.788 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:17.046 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:17.046 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:17.046 07:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:17.310 07:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:17.310 07:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:17.310 07:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8c792aa3-4fe8-4920-8b03-503fcf13942b lvol 150 00:14:17.568 07:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=adb0bfde-794a-4381-b32a-7d1e5ca94719 00:14:17.568 07:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:17.568 07:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:17.568 [2024-07-13 07:00:25.622533] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:17.568 [2024-07-13 07:00:25.622665] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:17.568 true 00:14:17.826 07:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:17.826 07:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:18.084 07:00:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:18.084 07:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:18.342 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 adb0bfde-794a-4381-b32a-7d1e5ca94719 00:14:18.342 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:18.599 [2024-07-13 07:00:26.663074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.857 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:18.857 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=90115 00:14:18.857 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:18.857 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:19.116 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 90115 /var/tmp/bdevperf.sock 00:14:19.116 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 90115 ']' 00:14:19.116 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.116 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.116 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.116 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.116 07:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:19.116 [2024-07-13 07:00:26.988616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
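(Sketch, for orientation — not part of the captured output.) The lvs_grow setup traced above boils down to roughly the following rpc.py sequence; $rpc and $aio_file are stand-in shell variables, while the sizes, cluster size and NQN are the ones visible in the trace. The point of the truncate/rescan pair is that the lvstore is created while the backing file is 200M (49 data clusters) and only later sees the extra capacity, which bdev_lvol_grow_lvstore then claims later in the run (99 data clusters).

    # illustrative sketch reconstructed from the trace above
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$aio_file"                                   # backing file starts at 200 MiB
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096                 # expose it as bdev "aio_bdev"
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # 4 MiB clusters -> 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)               # 150 MiB lvol, thick-provisioned

    truncate -s 400M "$aio_file"                                   # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev                                  # ...and let the bdev pick up 51200 -> 102400 blocks

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that subsystem over TCP (the startup lines that follow) and drives random writes while the lvstore is grown underneath it.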
00:14:19.116 [2024-07-13 07:00:26.988718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90115 ] 00:14:19.116 [2024-07-13 07:00:27.126347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.375 [2024-07-13 07:00:27.206541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.940 07:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.940 07:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:19.940 07:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:20.506 Nvme0n1 00:14:20.506 07:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:20.506 [ 00:14:20.506 { 00:14:20.506 "aliases": [ 00:14:20.506 "adb0bfde-794a-4381-b32a-7d1e5ca94719" 00:14:20.506 ], 00:14:20.506 "assigned_rate_limits": { 00:14:20.506 "r_mbytes_per_sec": 0, 00:14:20.506 "rw_ios_per_sec": 0, 00:14:20.506 "rw_mbytes_per_sec": 0, 00:14:20.506 "w_mbytes_per_sec": 0 00:14:20.506 }, 00:14:20.506 "block_size": 4096, 00:14:20.506 "claimed": false, 00:14:20.506 "driver_specific": { 00:14:20.506 "mp_policy": "active_passive", 00:14:20.506 "nvme": [ 00:14:20.506 { 00:14:20.506 "ctrlr_data": { 00:14:20.506 "ana_reporting": false, 00:14:20.506 "cntlid": 1, 00:14:20.506 "firmware_revision": "24.09", 00:14:20.506 "model_number": "SPDK bdev Controller", 00:14:20.506 "multi_ctrlr": true, 00:14:20.506 "oacs": { 00:14:20.506 "firmware": 0, 00:14:20.506 "format": 0, 00:14:20.506 "ns_manage": 0, 00:14:20.506 "security": 0 00:14:20.506 }, 00:14:20.506 "serial_number": "SPDK0", 00:14:20.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:20.506 "vendor_id": "0x8086" 00:14:20.506 }, 00:14:20.506 "ns_data": { 00:14:20.506 "can_share": true, 00:14:20.506 "id": 1 00:14:20.506 }, 00:14:20.506 "trid": { 00:14:20.506 "adrfam": "IPv4", 00:14:20.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:20.506 "traddr": "10.0.0.2", 00:14:20.506 "trsvcid": "4420", 00:14:20.506 "trtype": "TCP" 00:14:20.506 }, 00:14:20.506 "vs": { 00:14:20.506 "nvme_version": "1.3" 00:14:20.506 } 00:14:20.506 } 00:14:20.506 ] 00:14:20.506 }, 00:14:20.506 "memory_domains": [ 00:14:20.506 { 00:14:20.506 "dma_device_id": "system", 00:14:20.506 "dma_device_type": 1 00:14:20.506 } 00:14:20.506 ], 00:14:20.506 "name": "Nvme0n1", 00:14:20.506 "num_blocks": 38912, 00:14:20.506 "product_name": "NVMe disk", 00:14:20.506 "supported_io_types": { 00:14:20.506 "abort": true, 00:14:20.506 "compare": true, 00:14:20.506 "compare_and_write": true, 00:14:20.506 "copy": true, 00:14:20.506 "flush": true, 00:14:20.506 "get_zone_info": false, 00:14:20.506 "nvme_admin": true, 00:14:20.506 "nvme_io": true, 00:14:20.506 "nvme_io_md": false, 00:14:20.506 "nvme_iov_md": false, 00:14:20.506 "read": true, 00:14:20.506 "reset": true, 00:14:20.506 "seek_data": false, 00:14:20.506 "seek_hole": false, 00:14:20.506 "unmap": true, 00:14:20.506 "write": true, 00:14:20.506 "write_zeroes": true, 00:14:20.506 "zcopy": false, 00:14:20.506 
"zone_append": false, 00:14:20.506 "zone_management": false 00:14:20.506 }, 00:14:20.506 "uuid": "adb0bfde-794a-4381-b32a-7d1e5ca94719", 00:14:20.506 "zoned": false 00:14:20.506 } 00:14:20.506 ] 00:14:20.506 07:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=90163 00:14:20.506 07:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.506 07:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:20.771 Running I/O for 10 seconds... 00:14:21.702 Latency(us) 00:14:21.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.702 Nvme0n1 : 1.00 9822.00 38.37 0.00 0.00 0.00 0.00 0.00 00:14:21.702 =================================================================================================================== 00:14:21.702 Total : 9822.00 38.37 0.00 0.00 0.00 0.00 0.00 00:14:21.702 00:14:22.636 07:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:22.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.636 Nvme0n1 : 2.00 9774.50 38.18 0.00 0.00 0.00 0.00 0.00 00:14:22.636 =================================================================================================================== 00:14:22.636 Total : 9774.50 38.18 0.00 0.00 0.00 0.00 0.00 00:14:22.636 00:14:22.894 true 00:14:22.894 07:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:22.894 07:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:23.152 07:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:23.152 07:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:23.152 07:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 90163 00:14:23.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.720 Nvme0n1 : 3.00 9755.00 38.11 0.00 0.00 0.00 0.00 0.00 00:14:23.720 =================================================================================================================== 00:14:23.720 Total : 9755.00 38.11 0.00 0.00 0.00 0.00 0.00 00:14:23.720 00:14:24.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.669 Nvme0n1 : 4.00 9728.75 38.00 0.00 0.00 0.00 0.00 0.00 00:14:24.669 =================================================================================================================== 00:14:24.669 Total : 9728.75 38.00 0.00 0.00 0.00 0.00 0.00 00:14:24.669 00:14:25.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.619 Nvme0n1 : 5.00 9437.40 36.86 0.00 0.00 0.00 0.00 0.00 00:14:25.619 =================================================================================================================== 00:14:25.619 Total : 9437.40 36.86 0.00 0.00 0.00 0.00 0.00 00:14:25.619 00:14:26.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.554 
Nvme0n1 : 6.00 9136.50 35.69 0.00 0.00 0.00 0.00 0.00 00:14:26.554 =================================================================================================================== 00:14:26.554 Total : 9136.50 35.69 0.00 0.00 0.00 0.00 0.00 00:14:26.554 00:14:27.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.940 Nvme0n1 : 7.00 9096.57 35.53 0.00 0.00 0.00 0.00 0.00 00:14:27.940 =================================================================================================================== 00:14:27.940 Total : 9096.57 35.53 0.00 0.00 0.00 0.00 0.00 00:14:27.940 00:14:28.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.892 Nvme0n1 : 8.00 9072.38 35.44 0.00 0.00 0.00 0.00 0.00 00:14:28.892 =================================================================================================================== 00:14:28.892 Total : 9072.38 35.44 0.00 0.00 0.00 0.00 0.00 00:14:28.892 00:14:29.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.826 Nvme0n1 : 9.00 9055.00 35.37 0.00 0.00 0.00 0.00 0.00 00:14:29.826 =================================================================================================================== 00:14:29.826 Total : 9055.00 35.37 0.00 0.00 0.00 0.00 0.00 00:14:29.826 00:14:30.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.761 Nvme0n1 : 10.00 9029.20 35.27 0.00 0.00 0.00 0.00 0.00 00:14:30.761 =================================================================================================================== 00:14:30.761 Total : 9029.20 35.27 0.00 0.00 0.00 0.00 0.00 00:14:30.761 00:14:30.761 00:14:30.761 Latency(us) 00:14:30.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.761 Nvme0n1 : 10.01 9032.85 35.28 0.00 0.00 14165.70 3053.38 335544.32 00:14:30.761 =================================================================================================================== 00:14:30.761 Total : 9032.85 35.28 0.00 0.00 14165.70 3053.38 335544.32 00:14:30.761 0 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 90115 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 90115 ']' 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 90115 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90115 00:14:30.761 killing process with pid 90115 00:14:30.761 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.761 00:14:30.761 Latency(us) 00:14:30.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.761 =================================================================================================================== 00:14:30.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 
= sudo ']' 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90115' 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 90115 00:14:30.761 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 90115 00:14:31.022 07:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.285 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:31.543 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:31.543 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:31.801 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:31.801 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 89535 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 89535 00:14:31.802 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 89535 Killed "${NVMF_APP[@]}" "$@" 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=90328 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 90328 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 90328 ']' 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
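(Sketch, for orientation.) This is where the dirty variant diverges from the clean one: instead of tearing the lvol and lvstore down through RPCs, the target that holds the grown lvstore is killed with SIGKILL, so nothing is shut down cleanly and the next load of the backing file has to run blobstore recovery (the *NOTICE* recovery lines just below). Condensed to the commands visible in the trace, with the pids and paths of this particular run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    kill -9 89535                                      # SIGKILL the old nvmf_tgt; lvstore metadata is left dirty
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target (pid 90328 here)
    # once the new target is listening on /var/tmp/spdk.sock:
    $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # reload -> recovery
    $rpc bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b | jq -r '.[0].free_clusters'

The test then asserts that the recovered lvstore still reports 61 free clusters and 99 total data clusters, i.e. the grow performed before the kill survived the crash.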
00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.802 07:00:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:31.802 [2024-07-13 07:00:39.781841] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:31.802 [2024-07-13 07:00:39.781941] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.060 [2024-07-13 07:00:39.923343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.060 [2024-07-13 07:00:40.035400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.060 [2024-07-13 07:00:40.035473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.060 [2024-07-13 07:00:40.035484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.060 [2024-07-13 07:00:40.035493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.060 [2024-07-13 07:00:40.035500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.060 [2024-07-13 07:00:40.035526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.627 07:00:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.627 07:00:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:32.627 07:00:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.627 07:00:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.627 07:00:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:32.627 07:00:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.627 07:00:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:33.194 [2024-07-13 07:00:40.967779] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:33.194 [2024-07-13 07:00:40.968210] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:33.194 [2024-07-13 07:00:40.968374] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:33.194 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:33.194 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev adb0bfde-794a-4381-b32a-7d1e5ca94719 00:14:33.194 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=adb0bfde-794a-4381-b32a-7d1e5ca94719 00:14:33.194 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:33.194 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:33.194 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:33.194 07:00:41 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:33.194 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:33.194 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b adb0bfde-794a-4381-b32a-7d1e5ca94719 -t 2000 00:14:33.452 [ 00:14:33.452 { 00:14:33.452 "aliases": [ 00:14:33.452 "lvs/lvol" 00:14:33.452 ], 00:14:33.452 "assigned_rate_limits": { 00:14:33.452 "r_mbytes_per_sec": 0, 00:14:33.452 "rw_ios_per_sec": 0, 00:14:33.452 "rw_mbytes_per_sec": 0, 00:14:33.452 "w_mbytes_per_sec": 0 00:14:33.452 }, 00:14:33.452 "block_size": 4096, 00:14:33.452 "claimed": false, 00:14:33.452 "driver_specific": { 00:14:33.452 "lvol": { 00:14:33.452 "base_bdev": "aio_bdev", 00:14:33.452 "clone": false, 00:14:33.452 "esnap_clone": false, 00:14:33.452 "lvol_store_uuid": "8c792aa3-4fe8-4920-8b03-503fcf13942b", 00:14:33.452 "num_allocated_clusters": 38, 00:14:33.452 "snapshot": false, 00:14:33.452 "thin_provision": false 00:14:33.452 } 00:14:33.452 }, 00:14:33.452 "name": "adb0bfde-794a-4381-b32a-7d1e5ca94719", 00:14:33.452 "num_blocks": 38912, 00:14:33.452 "product_name": "Logical Volume", 00:14:33.452 "supported_io_types": { 00:14:33.452 "abort": false, 00:14:33.452 "compare": false, 00:14:33.452 "compare_and_write": false, 00:14:33.452 "copy": false, 00:14:33.452 "flush": false, 00:14:33.452 "get_zone_info": false, 00:14:33.452 "nvme_admin": false, 00:14:33.452 "nvme_io": false, 00:14:33.452 "nvme_io_md": false, 00:14:33.452 "nvme_iov_md": false, 00:14:33.452 "read": true, 00:14:33.452 "reset": true, 00:14:33.452 "seek_data": true, 00:14:33.452 "seek_hole": true, 00:14:33.452 "unmap": true, 00:14:33.452 "write": true, 00:14:33.452 "write_zeroes": true, 00:14:33.452 "zcopy": false, 00:14:33.452 "zone_append": false, 00:14:33.452 "zone_management": false 00:14:33.452 }, 00:14:33.452 "uuid": "adb0bfde-794a-4381-b32a-7d1e5ca94719", 00:14:33.452 "zoned": false 00:14:33.452 } 00:14:33.452 ] 00:14:33.452 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:33.452 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:33.452 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:33.712 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:33.712 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:33.712 07:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:33.971 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:33.971 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:34.229 [2024-07-13 07:00:42.276979] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:34.492 2024/07/13 07:00:42 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:8c792aa3-4fe8-4920-8b03-503fcf13942b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:34.492 request: 00:14:34.492 { 00:14:34.492 "method": "bdev_lvol_get_lvstores", 00:14:34.492 "params": { 00:14:34.492 "uuid": "8c792aa3-4fe8-4920-8b03-503fcf13942b" 00:14:34.492 } 00:14:34.492 } 00:14:34.492 Got JSON-RPC error response 00:14:34.492 GoRPCClient: error on JSON-RPC call 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:34.492 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:34.750 aio_bdev 00:14:34.750 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev adb0bfde-794a-4381-b32a-7d1e5ca94719 00:14:34.750 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=adb0bfde-794a-4381-b32a-7d1e5ca94719 00:14:34.750 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:34.750 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:34.750 07:00:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:34.750 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:34.750 07:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:35.007 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b adb0bfde-794a-4381-b32a-7d1e5ca94719 -t 2000 00:14:35.265 [ 00:14:35.265 { 00:14:35.265 "aliases": [ 00:14:35.265 "lvs/lvol" 00:14:35.265 ], 00:14:35.265 "assigned_rate_limits": { 00:14:35.265 "r_mbytes_per_sec": 0, 00:14:35.265 "rw_ios_per_sec": 0, 00:14:35.265 "rw_mbytes_per_sec": 0, 00:14:35.265 "w_mbytes_per_sec": 0 00:14:35.265 }, 00:14:35.265 "block_size": 4096, 00:14:35.265 "claimed": false, 00:14:35.265 "driver_specific": { 00:14:35.265 "lvol": { 00:14:35.265 "base_bdev": "aio_bdev", 00:14:35.265 "clone": false, 00:14:35.265 "esnap_clone": false, 00:14:35.265 "lvol_store_uuid": "8c792aa3-4fe8-4920-8b03-503fcf13942b", 00:14:35.265 "num_allocated_clusters": 38, 00:14:35.265 "snapshot": false, 00:14:35.265 "thin_provision": false 00:14:35.265 } 00:14:35.265 }, 00:14:35.265 "name": "adb0bfde-794a-4381-b32a-7d1e5ca94719", 00:14:35.265 "num_blocks": 38912, 00:14:35.265 "product_name": "Logical Volume", 00:14:35.265 "supported_io_types": { 00:14:35.265 "abort": false, 00:14:35.265 "compare": false, 00:14:35.265 "compare_and_write": false, 00:14:35.265 "copy": false, 00:14:35.265 "flush": false, 00:14:35.265 "get_zone_info": false, 00:14:35.265 "nvme_admin": false, 00:14:35.265 "nvme_io": false, 00:14:35.265 "nvme_io_md": false, 00:14:35.265 "nvme_iov_md": false, 00:14:35.265 "read": true, 00:14:35.265 "reset": true, 00:14:35.265 "seek_data": true, 00:14:35.265 "seek_hole": true, 00:14:35.265 "unmap": true, 00:14:35.265 "write": true, 00:14:35.265 "write_zeroes": true, 00:14:35.265 "zcopy": false, 00:14:35.265 "zone_append": false, 00:14:35.265 "zone_management": false 00:14:35.265 }, 00:14:35.265 "uuid": "adb0bfde-794a-4381-b32a-7d1e5ca94719", 00:14:35.265 "zoned": false 00:14:35.265 } 00:14:35.265 ] 00:14:35.265 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:35.265 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:35.265 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:35.525 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:35.525 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:35.525 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:35.798 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:35.798 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete adb0bfde-794a-4381-b32a-7d1e5ca94719 00:14:36.068 07:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c792aa3-4fe8-4920-8b03-503fcf13942b 00:14:36.326 07:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:36.584 07:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:36.851 00:14:36.851 real 0m20.509s 00:14:36.851 user 0m41.783s 00:14:36.851 sys 0m8.058s 00:14:36.851 07:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.851 07:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:36.852 ************************************ 00:14:36.852 END TEST lvs_grow_dirty 00:14:36.852 ************************************ 00:14:36.852 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:36.852 07:00:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:36.852 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:36.852 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:36.852 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:36.852 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:37.114 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:37.114 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:37.114 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:37.114 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:37.114 nvmf_trace.0 00:14:37.114 07:00:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:37.114 07:00:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:37.114 07:00:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:37.114 07:00:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:37.114 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:37.114 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:37.114 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:37.114 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:37.114 rmmod nvme_tcp 00:14:37.114 rmmod nvme_fabrics 00:14:37.114 rmmod nvme_keyring 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 90328 ']' 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 90328 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 90328 ']' 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 90328 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:37.377 07:00:45 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90328 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:37.377 killing process with pid 90328 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:37.377 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90328' 00:14:37.378 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 90328 00:14:37.378 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 90328 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:37.636 00:14:37.636 real 0m40.327s 00:14:37.636 user 1m4.214s 00:14:37.636 sys 0m11.026s 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.636 07:00:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:37.636 ************************************ 00:14:37.636 END TEST nvmf_lvs_grow 00:14:37.636 ************************************ 00:14:37.636 07:00:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:37.636 07:00:45 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:37.636 07:00:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:37.636 07:00:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.636 07:00:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.636 ************************************ 00:14:37.636 START TEST nvmf_bdev_io_wait 00:14:37.636 ************************************ 00:14:37.636 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:37.636 * Looking for test storage... 
00:14:37.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:37.636 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.636 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.895 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:37.896 Cannot find device "nvmf_tgt_br" 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.896 Cannot find device "nvmf_tgt_br2" 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:37.896 Cannot find device "nvmf_tgt_br" 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:37.896 Cannot find device "nvmf_tgt_br2" 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:37.896 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.156 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.156 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:38.156 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:38.156 07:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:38.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:14:38.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:14:38.156 00:14:38.156 --- 10.0.0.2 ping statistics --- 00:14:38.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.156 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:38.156 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.156 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:38.156 00:14:38.156 --- 10.0.0.3 ping statistics --- 00:14:38.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.156 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:38.156 00:14:38.156 --- 10.0.0.1 ping statistics --- 00:14:38.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.156 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=90743 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 90743 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 90743 ']' 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
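By this point nvmf_veth_init has rebuilt the virtual topology: the initiator side holds 10.0.0.1/24 on nvmf_init_if, the target side holds 10.0.0.2/24 and 10.0.0.3/24 on veth ends moved into the nvmf_tgt_ns_spdk namespace, all three links are enslaved to the nvmf_br bridge, the pings above confirm reachability in both directions, and nvmf_tgt has been launched inside the namespace with --wait-for-rpc. A hedged sketch of inspecting the same state by hand with stock iproute2 commands (not part of the test; interface and namespace names are taken from the log):

ip netns list                                     # expect nvmf_tgt_ns_spdk
ip -br addr show dev nvmf_init_if                 # 10.0.0.1/24 on the initiator side
ip netns exec nvmf_tgt_ns_spdk ip -br addr show   # 10.0.0.2/24 and 10.0.0.3/24 plus lo inside the namespace
ip link show master nvmf_br                       # bridge ports: nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2
ping -c 1 10.0.0.2                                # host -> target, as nvmf/common.sh just did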
00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.156 07:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.156 [2024-07-13 07:00:46.143778] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:38.156 [2024-07-13 07:00:46.143875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.415 [2024-07-13 07:00:46.286502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.415 [2024-07-13 07:00:46.385689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.415 [2024-07-13 07:00:46.385758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.415 [2024-07-13 07:00:46.385769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.415 [2024-07-13 07:00:46.385777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.415 [2024-07-13 07:00:46.385784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.415 [2024-07-13 07:00:46.386496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.415 [2024-07-13 07:00:46.386683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.415 [2024-07-13 07:00:46.386749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.415 [2024-07-13 07:00:46.386758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.347 07:00:47 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.347 [2024-07-13 07:00:47.268097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.347 Malloc0 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:39.347 [2024-07-13 07:00:47.326777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=90796 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=90798 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:39.347 { 00:14:39.347 "params": { 00:14:39.347 "name": "Nvme$subsystem", 00:14:39.347 "trtype": "$TEST_TRANSPORT", 
00:14:39.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.347 "adrfam": "ipv4", 00:14:39.347 "trsvcid": "$NVMF_PORT", 00:14:39.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.347 "hdgst": ${hdgst:-false}, 00:14:39.347 "ddgst": ${ddgst:-false} 00:14:39.347 }, 00:14:39.347 "method": "bdev_nvme_attach_controller" 00:14:39.347 } 00:14:39.347 EOF 00:14:39.347 )") 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=90800 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:39.347 { 00:14:39.347 "params": { 00:14:39.347 "name": "Nvme$subsystem", 00:14:39.347 "trtype": "$TEST_TRANSPORT", 00:14:39.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.347 "adrfam": "ipv4", 00:14:39.347 "trsvcid": "$NVMF_PORT", 00:14:39.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.347 "hdgst": ${hdgst:-false}, 00:14:39.347 "ddgst": ${ddgst:-false} 00:14:39.347 }, 00:14:39.347 "method": "bdev_nvme_attach_controller" 00:14:39.347 } 00:14:39.347 EOF 00:14:39.347 )") 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=90803 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:39.347 { 00:14:39.347 "params": { 00:14:39.347 "name": "Nvme$subsystem", 00:14:39.347 "trtype": "$TEST_TRANSPORT", 00:14:39.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.347 "adrfam": "ipv4", 00:14:39.347 "trsvcid": "$NVMF_PORT", 00:14:39.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.347 "hdgst": ${hdgst:-false}, 00:14:39.347 "ddgst": ${ddgst:-false} 00:14:39.347 }, 00:14:39.347 "method": "bdev_nvme_attach_controller" 00:14:39.347 } 00:14:39.347 EOF 00:14:39.347 )") 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local 
subsystem config 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:39.347 { 00:14:39.347 "params": { 00:14:39.347 "name": "Nvme$subsystem", 00:14:39.347 "trtype": "$TEST_TRANSPORT", 00:14:39.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.347 "adrfam": "ipv4", 00:14:39.347 "trsvcid": "$NVMF_PORT", 00:14:39.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.347 "hdgst": ${hdgst:-false}, 00:14:39.347 "ddgst": ${ddgst:-false} 00:14:39.347 }, 00:14:39.347 "method": "bdev_nvme_attach_controller" 00:14:39.347 } 00:14:39.347 EOF 00:14:39.347 )") 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:39.347 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:39.348 "params": { 00:14:39.348 "name": "Nvme1", 00:14:39.348 "trtype": "tcp", 00:14:39.348 "traddr": "10.0.0.2", 00:14:39.348 "adrfam": "ipv4", 00:14:39.348 "trsvcid": "4420", 00:14:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.348 "hdgst": false, 00:14:39.348 "ddgst": false 00:14:39.348 }, 00:14:39.348 "method": "bdev_nvme_attach_controller" 00:14:39.348 }' 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:39.348 "params": { 00:14:39.348 "name": "Nvme1", 00:14:39.348 "trtype": "tcp", 00:14:39.348 "traddr": "10.0.0.2", 00:14:39.348 "adrfam": "ipv4", 00:14:39.348 "trsvcid": "4420", 00:14:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.348 "hdgst": false, 00:14:39.348 "ddgst": false 00:14:39.348 }, 00:14:39.348 "method": "bdev_nvme_attach_controller" 00:14:39.348 }' 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:39.348 "params": { 00:14:39.348 "name": "Nvme1", 00:14:39.348 "trtype": "tcp", 00:14:39.348 "traddr": "10.0.0.2", 00:14:39.348 "adrfam": "ipv4", 00:14:39.348 "trsvcid": "4420", 00:14:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.348 "hdgst": false, 00:14:39.348 "ddgst": false 00:14:39.348 }, 00:14:39.348 "method": "bdev_nvme_attach_controller" 00:14:39.348 }' 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:39.348 "params": { 00:14:39.348 "name": "Nvme1", 00:14:39.348 "trtype": "tcp", 00:14:39.348 "traddr": "10.0.0.2", 00:14:39.348 "adrfam": "ipv4", 00:14:39.348 "trsvcid": "4420", 00:14:39.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.348 "hdgst": false, 00:14:39.348 "ddgst": false 00:14:39.348 }, 00:14:39.348 "method": "bdev_nvme_attach_controller" 00:14:39.348 }' 00:14:39.348 [2024-07-13 07:00:47.394147] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:39.348 [2024-07-13 07:00:47.394865] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:39.348 07:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 90796 00:14:39.348 [2024-07-13 07:00:47.410442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:39.348 [2024-07-13 07:00:47.410648] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:39.348 [2024-07-13 07:00:47.421023] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:39.348 [2024-07-13 07:00:47.421094] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:39.348 [2024-07-13 07:00:47.421366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:39.605 [2024-07-13 07:00:47.421433] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:39.605 [2024-07-13 07:00:47.604888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.605 [2024-07-13 07:00:47.676815] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.863 [2024-07-13 07:00:47.685046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:39.863 [2024-07-13 07:00:47.755969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:39.863 [2024-07-13 07:00:47.760093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.863 [2024-07-13 07:00:47.831471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.863 [2024-07-13 07:00:47.843950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:39.863 Running I/O for 1 seconds... 00:14:39.863 [2024-07-13 07:00:47.904577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:39.863 Running I/O for 1 seconds... 00:14:40.119 Running I/O for 1 seconds... 00:14:40.119 Running I/O for 1 seconds... 
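The four bdevperf instances launched above (write, read, flush and unmap, pinned to separate cores via -m 0x10/0x20/0x40/0x80) never read a config file from disk: gen_nvmf_target_json is handed to each of them through process substitution as --json /dev/fd/63, and the printf output above shows the bdev_nvme_attach_controller parameters it carries. A hedged sketch of an equivalent standalone launch of the write job with the JSON written to a file; the outer subsystems/config wrapper and the /tmp/nvme1.json path are illustrative assumptions rather than verbatim log output:

# attach-controller parameters copied from the printf output above; wrapper layout is assumed
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
    --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

Each instance attaches its own controller named Nvme1 to 10.0.0.2:4420 over TCP and drives the resulting Nvme1n1 bdev for one second, which is what produces the four per-workload latency tables that follow.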
00:14:41.054 00:14:41.054 Latency(us) 00:14:41.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.054 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:41.054 Nvme1n1 : 1.00 193058.41 754.13 0.00 0.00 660.41 260.65 990.49 00:14:41.054 =================================================================================================================== 00:14:41.054 Total : 193058.41 754.13 0.00 0.00 660.41 260.65 990.49 00:14:41.054 00:14:41.054 Latency(us) 00:14:41.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.054 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:41.054 Nvme1n1 : 1.02 6761.87 26.41 0.00 0.00 18673.66 2174.60 31457.28 00:14:41.054 =================================================================================================================== 00:14:41.054 Total : 6761.87 26.41 0.00 0.00 18673.66 2174.60 31457.28 00:14:41.054 00:14:41.054 Latency(us) 00:14:41.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.054 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:41.054 Nvme1n1 : 1.01 6496.63 25.38 0.00 0.00 19639.19 5838.66 36700.16 00:14:41.054 =================================================================================================================== 00:14:41.054 Total : 6496.63 25.38 0.00 0.00 19639.19 5838.66 36700.16 00:14:41.054 00:14:41.054 Latency(us) 00:14:41.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.054 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:41.054 Nvme1n1 : 1.01 7525.16 29.40 0.00 0.00 16913.12 10426.18 27048.49 00:14:41.054 =================================================================================================================== 00:14:41.054 Total : 7525.16 29.40 0.00 0.00 16913.12 10426.18 27048.49 00:14:41.315 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 90798 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 90800 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 90803 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.574 rmmod nvme_tcp 00:14:41.574 rmmod nvme_fabrics 00:14:41.574 rmmod nvme_keyring 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 90743 ']' 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 90743 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 90743 ']' 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 90743 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90743 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:41.574 killing process with pid 90743 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90743' 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 90743 00:14:41.574 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 90743 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:41.833 00:14:41.833 real 0m4.267s 00:14:41.833 user 0m18.895s 00:14:41.833 sys 0m1.944s 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.833 07:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:41.833 ************************************ 00:14:41.833 END TEST nvmf_bdev_io_wait 00:14:41.833 ************************************ 00:14:42.091 07:00:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:42.091 07:00:49 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:42.091 07:00:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:42.091 07:00:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.091 07:00:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:42.091 ************************************ 00:14:42.091 START TEST nvmf_queue_depth 00:14:42.091 ************************************ 00:14:42.091 07:00:49 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:42.091 * Looking for test storage... 00:14:42.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.091 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:42.092 Cannot find device "nvmf_tgt_br" 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.092 Cannot find device "nvmf_tgt_br2" 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:42.092 Cannot find device "nvmf_tgt_br" 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:42.092 Cannot find device "nvmf_tgt_br2" 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:14:42.092 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:42.349 07:00:50 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:14:42.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:14:42.349 00:14:42.349 --- 10.0.0.2 ping statistics --- 00:14:42.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.349 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:42.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:42.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:14:42.349 00:14:42.349 --- 10.0.0.3 ping statistics --- 00:14:42.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.349 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:42.349 00:14:42.349 --- 10.0.0.1 ping statistics --- 00:14:42.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.349 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=91031 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 91031 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 91031 ']' 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
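The queue_depth suite has now rebuilt the same veth topology and started nvmf_tgt inside the namespace with -m 0x2 (nvmfpid=91031); waitforlisten then polls until the target answers on /var/tmp/spdk.sock. A hedged sketch of that kind of readiness loop, not the actual helper body, using the retry budget visible above (max_retries=100):

# poll the target's JSON-RPC UNIX socket until it answers or the retry budget is exhausted
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        echo "nvmf_tgt is listening on $rpc_addr"
        break
    fi
    sleep 0.1
done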
00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.349 07:00:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:42.607 [2024-07-13 07:00:50.484269] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:42.607 [2024-07-13 07:00:50.484362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.607 [2024-07-13 07:00:50.627261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.865 [2024-07-13 07:00:50.712089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.865 [2024-07-13 07:00:50.712156] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.865 [2024-07-13 07:00:50.712183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.865 [2024-07-13 07:00:50.712190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.865 [2024-07-13 07:00:50.712197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.865 [2024-07-13 07:00:50.712226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:43.433 [2024-07-13 07:00:51.485190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.433 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:43.692 Malloc0 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
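With the target listening, the script provisions it over JSON-RPC: a TCP transport (the -t tcp -o -u 8192 options above), a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1; the namespace and the 10.0.0.2:4420 TCP listener are added in the entries that follow. A hedged sketch of the same sequence issued directly with scripts/rpc.py against the default /var/tmp/spdk.sock socket, which is essentially what the rpc_cmd wrapper forwards to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                    # transport options as logged above
$rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, fixed serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # listen on the veth address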
00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:43.692 [2024-07-13 07:00:51.554794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=91081 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 91081 /var/tmp/bdevperf.sock 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 91081 ']' 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.692 07:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:43.692 [2024-07-13 07:00:51.615332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
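The namespace and the 10.0.0.2:4420 TCP listener above complete the target; bdevperf is then launched as a separate SPDK application on its own RPC socket and driven remotely. A condensed sketch of that harness, assuming the same binaries and socket paths as the trace (the controller attach and the perform_tests call appear in the next chunk):

  # Finish the target side.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: with -z, bdevperf starts idle and waits on /var/tmp/bdevperf.sock;
  # -q 1024 is the queue depth under test, -o 4096 the I/O size, -w verify the workload, -t 10 the runtime in seconds.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # Attach the exported namespace over NVMe/TCP and kick off the run.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests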
00:14:43.692 [2024-07-13 07:00:51.615429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91081 ] 00:14:43.692 [2024-07-13 07:00:51.756097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.949 [2024-07-13 07:00:51.889198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.515 07:00:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.515 07:00:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:44.515 07:00:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:44.515 07:00:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.515 07:00:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:44.773 NVMe0n1 00:14:44.773 07:00:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.773 07:00:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:44.773 Running I/O for 10 seconds... 00:14:54.771 00:14:54.771 Latency(us) 00:14:54.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.771 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:54.771 Verification LBA range: start 0x0 length 0x4000 00:14:54.771 NVMe0n1 : 10.08 9828.38 38.39 0.00 0.00 103758.16 25618.62 80073.08 00:14:54.771 =================================================================================================================== 00:14:54.771 Total : 9828.38 38.39 0.00 0.00 103758.16 25618.62 80073.08 00:14:54.771 0 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 91081 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 91081 ']' 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 91081 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91081 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:55.030 killing process with pid 91081 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91081' 00:14:55.030 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.030 00:14:55.030 Latency(us) 00:14:55.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.030 =================================================================================================================== 00:14:55.030 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.030 07:01:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 91081 00:14:55.030 07:01:02 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 91081 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.298 rmmod nvme_tcp 00:14:55.298 rmmod nvme_fabrics 00:14:55.298 rmmod nvme_keyring 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 91031 ']' 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 91031 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 91031 ']' 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 91031 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91031 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:55.298 killing process with pid 91031 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91031' 00:14:55.298 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 91031 00:14:55.299 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 91031 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:55.558 00:14:55.558 real 0m13.604s 00:14:55.558 user 0m23.020s 00:14:55.558 sys 0m2.398s 00:14:55.558 07:01:03 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.558 ************************************ 00:14:55.558 END TEST nvmf_queue_depth 00:14:55.558 ************************************ 00:14:55.559 07:01:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:55.559 07:01:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:55.559 07:01:03 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:55.559 07:01:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:55.559 07:01:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.559 07:01:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.559 ************************************ 00:14:55.559 START TEST nvmf_target_multipath 00:14:55.559 ************************************ 00:14:55.559 07:01:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:55.817 * Looking for test storage... 00:14:55.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.817 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:55.818 07:01:03 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:55.818 Cannot find device "nvmf_tgt_br" 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.818 Cannot find device "nvmf_tgt_br2" 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:55.818 Cannot find device "nvmf_tgt_br" 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:55.818 Cannot find device "nvmf_tgt_br2" 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:55.818 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.076 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.076 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.076 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.076 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.076 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:56.077 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:56.077 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:56.077 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:56.077 
07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.077 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.077 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.077 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:56.077 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:56.077 07:01:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:56.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:14:56.077 00:14:56.077 --- 10.0.0.2 ping statistics --- 00:14:56.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.077 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:56.077 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.077 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:14:56.077 00:14:56.077 --- 10.0.0.3 ping statistics --- 00:14:56.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.077 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:56.077 00:14:56.077 --- 10.0.0.1 ping statistics --- 00:14:56.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.077 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:56.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=91418 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 91418 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 91418 ']' 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.077 07:01:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:56.077 [2024-07-13 07:01:04.144900] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
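Up to this point nvmftestinit has built the virtual two-path test bed for the multipath test: a dedicated network namespace for the target, one veth pair per target path plus one for the initiator, a bridge joining the host-side ends, and the 10.0.0.0/24 addresses whose reachability the three pings above confirm. A condensed sketch of that nvmf_veth_init sequence, using the interface names from the trace (the preliminary cleanup of stale links and the error handling are omitted):

  # Condensed sketch of the traced nvmf_veth_init steps (stale-link cleanup omitted).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator path
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target path (10.0.0.2)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target path (10.0.0.3)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge the three host-side veth ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # verify both target addresses from the host
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # and the initiator address from the target netns

The multipath nvmf_tgt is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced below), which is why its two listeners on 10.0.0.2 and 10.0.0.3 reach the initiator over distinct paths.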
00:14:56.077 [2024-07-13 07:01:04.144993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.335 [2024-07-13 07:01:04.289756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.600 [2024-07-13 07:01:04.429121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.600 [2024-07-13 07:01:04.429539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.600 [2024-07-13 07:01:04.429865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.600 [2024-07-13 07:01:04.430029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.600 [2024-07-13 07:01:04.430072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.600 [2024-07-13 07:01:04.430382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.600 [2024-07-13 07:01:04.430446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.600 [2024-07-13 07:01:04.430537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.600 [2024-07-13 07:01:04.430541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.180 07:01:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.180 07:01:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:14:57.180 07:01:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.180 07:01:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.180 07:01:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:57.180 07:01:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.180 07:01:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:57.440 [2024-07-13 07:01:05.416891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.440 07:01:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:57.698 Malloc0 00:14:57.698 07:01:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:57.957 07:01:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.214 07:01:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.473 [2024-07-13 07:01:06.427859] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.473 07:01:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:14:58.730 [2024-07-13 07:01:06.660150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:58.730 07:01:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:14:58.988 07:01:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:14:59.247 07:01:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:59.247 07:01:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.247 07:01:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.247 07:01:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:59.247 07:01:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=91563 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:15:01.147 07:01:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:01.147 [global] 00:15:01.147 thread=1 00:15:01.147 invalidate=1 00:15:01.147 rw=randrw 00:15:01.147 time_based=1 00:15:01.147 runtime=6 00:15:01.147 ioengine=libaio 00:15:01.147 direct=1 00:15:01.147 bs=4096 00:15:01.147 iodepth=128 00:15:01.147 norandommap=0 00:15:01.147 numjobs=1 00:15:01.147 00:15:01.147 verify_dump=1 00:15:01.147 verify_backlog=512 00:15:01.147 verify_state_save=0 00:15:01.147 do_verify=1 00:15:01.147 verify=crc32c-intel 00:15:01.147 [job0] 00:15:01.147 filename=/dev/nvme0n1 00:15:01.147 Could not set queue depth (nvme0n1) 00:15:01.414 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.414 fio-3.35 00:15:01.414 Starting 1 thread 00:15:02.387 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:02.387 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:02.644 07:01:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:04.017 07:01:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:04.017 07:01:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:04.017 07:01:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:04.017 07:01:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:04.017 07:01:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:04.276 07:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:05.649 07:01:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:05.649 07:01:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:05.649 07:01:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:05.649 07:01:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 91563 00:15:07.558 00:15:07.558 job0: (groupid=0, jobs=1): err= 0: pid=91584: Sat Jul 13 07:01:15 2024 00:15:07.558 read: IOPS=10.6k, BW=41.6MiB/s (43.6MB/s)(250MiB/6006msec) 00:15:07.558 slat (usec): min=5, max=7160, avg=53.71, stdev=239.10 00:15:07.558 clat (usec): min=1481, max=14768, avg=8180.00, stdev=1235.90 00:15:07.558 lat (usec): min=1492, max=14778, avg=8233.71, stdev=1245.46 00:15:07.558 clat percentiles (usec): 00:15:07.558 | 1.00th=[ 5211], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7373], 00:15:07.558 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:15:07.558 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10421], 00:15:07.558 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13435], 99.95th=[13566], 00:15:07.558 | 99.99th=[14615] 00:15:07.558 bw ( KiB/s): min=11704, max=26728, per=52.76%, avg=22448.18, stdev=5594.65, samples=11 00:15:07.558 iops : min= 2926, max= 6682, avg=5612.00, stdev=1398.63, samples=11 00:15:07.558 write: IOPS=6326, BW=24.7MiB/s (25.9MB/s)(133MiB/5365msec); 0 zone resets 00:15:07.558 slat (usec): min=8, max=3252, avg=65.65, stdev=169.32 00:15:07.558 clat (usec): min=866, max=13903, avg=7047.60, stdev=993.87 00:15:07.558 lat (usec): min=925, max=13929, avg=7113.25, stdev=997.90 00:15:07.558 clat percentiles (usec): 00:15:07.558 | 1.00th=[ 4146], 5.00th=[ 5342], 10.00th=[ 5997], 20.00th=[ 6390], 00:15:07.558 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7308], 00:15:07.558 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8455], 00:15:07.558 | 99.00th=[10028], 99.50th=[10814], 99.90th=[12256], 99.95th=[12649], 00:15:07.558 | 99.99th=[13042] 00:15:07.558 bw ( KiB/s): min=12288, max=26459, per=88.74%, avg=22457.73, stdev=5177.07, samples=11 00:15:07.558 iops : min= 3072, max= 6614, avg=5614.36, stdev=1294.21, samples=11 00:15:07.558 lat (usec) : 1000=0.01% 00:15:07.558 lat (msec) : 2=0.01%, 4=0.30%, 10=95.17%, 20=4.52% 00:15:07.558 cpu : usr=6.00%, sys=22.08%, ctx=6259, majf=0, minf=96 00:15:07.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:07.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.558 issued rwts: total=63887,33941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.558 00:15:07.558 Run status group 0 (all jobs): 00:15:07.558 READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=250MiB (262MB), run=6006-6006msec 00:15:07.558 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=133MiB (139MB), run=5365-5365msec 00:15:07.558 00:15:07.558 Disk stats (read/write): 00:15:07.558 nvme0n1: ios=63250/33089, merge=0/0, 
ticks=484708/218236, in_queue=702944, util=98.68% 00:15:07.558 07:01:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:07.816 07:01:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:08.074 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:15:08.074 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:08.074 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:08.074 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:08.074 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:08.074 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:08.075 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:08.075 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:08.075 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:08.075 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:08.075 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:08.075 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:08.075 07:01:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:09.009 07:01:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:09.009 07:01:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:09.009 07:01:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:09.009 07:01:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:15:09.009 07:01:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=91707 00:15:09.009 07:01:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:15:09.009 07:01:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:09.267 [global] 00:15:09.267 thread=1 00:15:09.267 invalidate=1 00:15:09.267 rw=randrw 00:15:09.267 time_based=1 00:15:09.267 runtime=6 00:15:09.267 ioengine=libaio 00:15:09.267 direct=1 00:15:09.267 bs=4096 00:15:09.267 iodepth=128 00:15:09.267 norandommap=0 00:15:09.267 numjobs=1 00:15:09.267 00:15:09.267 verify_dump=1 00:15:09.267 verify_backlog=512 00:15:09.267 verify_state_save=0 00:15:09.267 do_verify=1 00:15:09.267 verify=crc32c-intel 00:15:09.267 [job0] 00:15:09.267 filename=/dev/nvme0n1 00:15:09.267 Could not set queue depth (nvme0n1) 00:15:09.267 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.267 fio-3.35 00:15:09.267 Starting 1 thread 00:15:10.199 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:10.457 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:10.716 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:10.716 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:10.716 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:10.716 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:10.716 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:10.716 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:10.717 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:10.717 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:10.717 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:10.717 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:10.717 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:10.717 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:10.717 07:01:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:11.652 07:01:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:11.652 07:01:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:11.652 07:01:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:11.652 07:01:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:11.910 07:01:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:12.168 07:01:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:13.580 07:01:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:13.580 07:01:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:13.580 07:01:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:13.580 07:01:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 91707 00:15:15.494 00:15:15.494 job0: (groupid=0, jobs=1): err= 0: pid=91734: Sat Jul 13 07:01:23 2024 00:15:15.494 read: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(263MiB/6005msec) 00:15:15.494 slat (usec): min=3, max=4758, avg=43.78, stdev=207.11 00:15:15.494 clat (usec): min=349, max=19777, avg=7794.44, stdev=1968.76 00:15:15.494 lat (usec): min=372, max=19786, avg=7838.22, stdev=1974.64 00:15:15.494 clat percentiles (usec): 00:15:15.494 | 1.00th=[ 2573], 5.00th=[ 4293], 10.00th=[ 5669], 20.00th=[ 6783], 00:15:15.494 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8094], 00:15:15.494 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[10159], 95.00th=[11076], 00:15:15.494 | 99.00th=[13698], 99.50th=[14746], 99.90th=[16909], 99.95th=[17433], 00:15:15.494 | 99.99th=[19530] 00:15:15.494 bw ( KiB/s): min= 8504, max=29328, per=52.81%, avg=23700.36, stdev=5959.41, samples=11 00:15:15.494 iops : min= 2126, max= 7332, avg=5925.09, stdev=1489.85, samples=11 00:15:15.494 write: IOPS=6640, BW=25.9MiB/s (27.2MB/s)(141MiB/5429msec); 0 zone resets 00:15:15.494 slat (usec): min=11, max=2049, avg=52.81, stdev=136.91 00:15:15.494 clat (usec): min=1170, max=15287, avg=6580.12, stdev=1641.51 00:15:15.494 lat (usec): min=1197, max=15306, avg=6632.93, stdev=1646.09 00:15:15.494 clat percentiles (usec): 00:15:15.494 | 1.00th=[ 2343], 5.00th=[ 3294], 10.00th=[ 4178], 20.00th=[ 5735], 00:15:15.494 | 30.00th=[ 6194], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6980], 00:15:15.494 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8225], 95.00th=[ 9110], 00:15:15.494 | 99.00th=[10814], 99.50th=[11731], 99.90th=[13829], 99.95th=[14353], 00:15:15.494 | 99.99th=[15008] 00:15:15.494 bw ( KiB/s): min= 8944, max=28560, per=89.29%, avg=23719.27, stdev=5755.96, samples=11 00:15:15.494 iops : min= 2236, max= 7140, avg=5929.82, stdev=1438.99, samples=11 00:15:15.494 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.03% 00:15:15.494 lat (msec) : 2=0.42%, 4=5.37%, 10=86.40%, 20=7.74% 00:15:15.494 cpu : usr=5.86%, sys=22.29%, ctx=6671, majf=0, minf=108 00:15:15.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:15.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.494 issued rwts: total=67378,36053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.494 00:15:15.494 Run status group 0 (all jobs): 00:15:15.494 READ: bw=43.8MiB/s (46.0MB/s), 43.8MiB/s-43.8MiB/s (46.0MB/s-46.0MB/s), io=263MiB (276MB), run=6005-6005msec 00:15:15.494 WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=141MiB (148MB), run=5429-5429msec 00:15:15.494 00:15:15.494 Disk stats (read/write): 00:15:15.494 nvme0n1: ios=66651/35161, merge=0/0, ticks=488277/217250, in_queue=705527, util=98.71% 00:15:15.494 07:01:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:15.752 07:01:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.752 07:01:23 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:15:15.752 07:01:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:15.752 07:01:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.752 07:01:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:15.752 07:01:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.752 07:01:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:15:15.752 07:01:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.010 07:01:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:16.010 07:01:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:16.010 07:01:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:16.010 07:01:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:15:16.010 07:01:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.010 07:01:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.010 rmmod nvme_tcp 00:15:16.010 rmmod nvme_fabrics 00:15:16.010 rmmod nvme_keyring 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 91418 ']' 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 91418 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 91418 ']' 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 91418 00:15:16.010 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:15:16.267 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.267 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91418 00:15:16.267 killing process with pid 91418 00:15:16.267 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:16.267 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:16.267 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91418' 00:15:16.267 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 91418 00:15:16.267 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 91418 00:15:16.534 
07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:16.534 00:15:16.534 real 0m20.860s 00:15:16.534 user 1m21.462s 00:15:16.534 sys 0m6.462s 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.534 07:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:16.534 ************************************ 00:15:16.534 END TEST nvmf_target_multipath 00:15:16.534 ************************************ 00:15:16.534 07:01:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:16.534 07:01:24 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:16.534 07:01:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:16.534 07:01:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.534 07:01:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.534 ************************************ 00:15:16.534 START TEST nvmf_zcopy 00:15:16.534 ************************************ 00:15:16.535 07:01:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:16.535 * Looking for test storage... 
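Before following the zcopy setup any further, it is worth condensing what the multipath block above actually did: flip each listener's ANA state over JSON-RPC, then poll the host-side sysfs attribute until the kernel reports the new state (or a 20-try budget runs out). A minimal sketch reconstructed from the traced commands follows; the real check_ana_state helper in test/nvmf/target/multipath.sh may differ in detail, and paths are shown relative to the SPDK repo root.

  # Poll /sys/block/<ctrl>/ana_state until it matches the expected value.
  check_ana_state() {
      local path=$1 ana_state=$2
      local timeout=20 ana_state_f=/sys/block/$path/ana_state
      while [[ ! -e $ana_state_f || $(< "$ana_state_f") != "$ana_state" ]]; do
          (( timeout-- == 0 )) && return 1
          sleep 1s
      done
  }

  # Make 10.0.0.2 the non-optimized path and 10.0.0.3 inaccessible, then wait
  # for both host controllers to observe the change (multipath.sh@126-@130 above).
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
  check_ana_state nvme0c0n1 non-optimized
  check_ana_state nvme0c1n1 inaccessible

Note the spelling difference visible in the trace: the RPC takes non_optimized with an underscore, while the kernel's ana_state attribute reports non-optimized with a hyphen.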
00:15:16.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:16.535 07:01:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.535 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.793 07:01:24 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:16.794 Cannot find device "nvmf_tgt_br" 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.794 Cannot find device "nvmf_tgt_br2" 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:16.794 Cannot find device "nvmf_tgt_br" 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:16.794 Cannot find device "nvmf_tgt_br2" 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.794 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:17.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:15:17.053 00:15:17.053 --- 10.0.0.2 ping statistics --- 00:15:17.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.053 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:17.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:17.053 00:15:17.053 --- 10.0.0.3 ping statistics --- 00:15:17.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.053 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:17.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:15:17.053 00:15:17.053 --- 10.0.0.1 ping statistics --- 00:15:17.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.053 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.053 07:01:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.053 07:01:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:17.053 07:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.053 07:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.053 07:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:17.053 07:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=92009 00:15:17.054 07:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 92009 00:15:17.054 07:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.054 07:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 92009 ']' 00:15:17.054 07:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.054 07:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.054 07:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.054 07:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.054 07:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 [2024-07-13 07:01:25.068106] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:17.054 [2024-07-13 07:01:25.068208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.312 [2024-07-13 07:01:25.210162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.312 [2024-07-13 07:01:25.302949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.312 [2024-07-13 07:01:25.303013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
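Before any of the zcopy RPCs can run, nvmf_veth_init (nvmf/common.sh@141 onward in the trace) builds the virtual test network: a network namespace for the target, veth pairs for the initiator and the two target interfaces, one bridge joining the host-side peers, and an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of that bring-up, using the interface names and 10.0.0.0/24 addressing shown above (teardown of any stale topology and error handling omitted):

  # Target interfaces live in their own namespace; the host acts as initiator.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # One bridge ties the host-side peers together; let NVMe/TCP traffic in.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Connectivity checks mirrored from the trace, then the target itself is
  # started inside the namespace (the nvmfappstart step above).
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are just the teardown half of this sequence finding nothing left over from a previous run.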
00:15:17.312 [2024-07-13 07:01:25.303023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.312 [2024-07-13 07:01:25.303031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.312 [2024-07-13 07:01:25.303037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.312 [2024-07-13 07:01:25.303068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 [2024-07-13 07:01:26.077872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 [2024-07-13 07:01:26.093948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 malloc0 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.247 
07:01:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:18.247 { 00:15:18.247 "params": { 00:15:18.247 "name": "Nvme$subsystem", 00:15:18.247 "trtype": "$TEST_TRANSPORT", 00:15:18.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.247 "adrfam": "ipv4", 00:15:18.247 "trsvcid": "$NVMF_PORT", 00:15:18.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.247 "hdgst": ${hdgst:-false}, 00:15:18.247 "ddgst": ${ddgst:-false} 00:15:18.247 }, 00:15:18.247 "method": "bdev_nvme_attach_controller" 00:15:18.247 } 00:15:18.247 EOF 00:15:18.247 )") 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:18.247 07:01:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:18.247 "params": { 00:15:18.247 "name": "Nvme1", 00:15:18.247 "trtype": "tcp", 00:15:18.247 "traddr": "10.0.0.2", 00:15:18.247 "adrfam": "ipv4", 00:15:18.247 "trsvcid": "4420", 00:15:18.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.247 "hdgst": false, 00:15:18.247 "ddgst": false 00:15:18.247 }, 00:15:18.247 "method": "bdev_nvme_attach_controller" 00:15:18.247 }' 00:15:18.247 [2024-07-13 07:01:26.193946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:18.247 [2024-07-13 07:01:26.194051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92060 ] 00:15:18.505 [2024-07-13 07:01:26.331920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.505 [2024-07-13 07:01:26.459327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.763 Running I/O for 10 seconds... 
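With the network and target process in place, zcopy.sh@22-@33 configures the target entirely over JSON-RPC: a TCP transport created with zero-copy enabled, one subsystem backed by a 32 MB malloc namespace, listeners on 10.0.0.2:4420, and then the 10-second bdevperf verify workload whose results follow. A sketch of the equivalent rpc.py calls (paths relative to the SPDK repo root; the script itself goes through the rpc_cmd wrapper and feeds bdevperf its JSON over /dev/fd/62):

  # Transport with zero-copy on ('-t tcp -o -c 0 --zcopy' as traced above),
  # then the subsystem, its listeners, and a 32 MB / 4 KiB-block malloc bdev.
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # 10 s verify workload at queue depth 128 with 8 KiB I/O; the JSON produced
  # by gen_nvmf_target_json (printed in the trace) attaches Nvme1 over TCP.
  build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -t 10 -q 128 -w verify -o 8192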
00:15:28.727 00:15:28.727 Latency(us) 00:15:28.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.727 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:28.727 Verification LBA range: start 0x0 length 0x1000 00:15:28.727 Nvme1n1 : 10.01 6518.38 50.92 0.00 0.00 19579.34 2025.66 27286.81 00:15:28.727 =================================================================================================================== 00:15:28.727 Total : 6518.38 50.92 0.00 0.00 19579.34 2025.66 27286.81 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=92182 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:28.985 { 00:15:28.985 "params": { 00:15:28.985 "name": "Nvme$subsystem", 00:15:28.985 "trtype": "$TEST_TRANSPORT", 00:15:28.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.985 "adrfam": "ipv4", 00:15:28.985 "trsvcid": "$NVMF_PORT", 00:15:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.985 "hdgst": ${hdgst:-false}, 00:15:28.985 "ddgst": ${ddgst:-false} 00:15:28.985 }, 00:15:28.985 "method": "bdev_nvme_attach_controller" 00:15:28.985 } 00:15:28.985 EOF 00:15:28.985 )") 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:28.985 [2024-07-13 07:01:36.958414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:28.985 [2024-07-13 07:01:36.958461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:28.985 07:01:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:28.985 "params": { 00:15:28.985 "name": "Nvme1", 00:15:28.985 "trtype": "tcp", 00:15:28.985 "traddr": "10.0.0.2", 00:15:28.985 "adrfam": "ipv4", 00:15:28.985 "trsvcid": "4420", 00:15:28.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:28.985 "hdgst": false, 00:15:28.985 "ddgst": false 00:15:28.985 }, 00:15:28.985 "method": "bdev_nvme_attach_controller" 00:15:28.985 }' 00:15:28.985 2024/07/13 07:01:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:28.985 [2024-07-13 07:01:36.970408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:28.985 [2024-07-13 07:01:36.970438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:28.985 2024/07/13 07:01:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:28.985 [2024-07-13 07:01:36.982395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:28.985 [2024-07-13 07:01:36.982423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:28.985 2024/07/13 07:01:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:28.985 [2024-07-13 07:01:36.992204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
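The second bdevperf pass is launched in the background (perfpid=92182 above, a 5-second 50/50 random read/write job at queue depth 128) while the script immediately starts issuing nvmf_subsystem_add_ns calls against the live subsystem; the job is reaped with wait once the RPC loop ends. The loop below is illustrative only: the exact iteration count and any companion remove/resume calls in zcopy.sh are not visible in this excerpt because xtrace is disabled around it.

  build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!

  # Hammer namespace hot-add while I/O is in flight; most attempts are
  # rejected with -32602 because NSID 1 is still attached (see below).
  for _ in $(seq 50); do          # iteration count is an assumption
      scripts/rpc.py nvmf_subsystem_add_ns \
          nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done

  wait "$perfpid"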
00:15:28.985 [2024-07-13 07:01:36.992306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92182 ] 00:15:28.985 [2024-07-13 07:01:36.994412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:28.985 [2024-07-13 07:01:36.994436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:28.985 2024/07/13 07:01:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:28.985 [2024-07-13 07:01:37.006399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:28.985 [2024-07-13 07:01:37.006644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:28.985 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:28.985 [2024-07-13 07:01:37.018411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:28.985 [2024-07-13 07:01:37.018611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:28.985 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:28.985 [2024-07-13 07:01:37.030407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:28.985 [2024-07-13 07:01:37.030588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:28.985 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:28.986 [2024-07-13 07:01:37.042411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:28.986 [2024-07-13 07:01:37.042592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:28.986 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:28.986 [2024-07-13 07:01:37.054420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:28.986 [2024-07-13 07:01:37.054605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:28.986 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:29.245 [2024-07-13 07:01:37.066426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.066453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.078402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.078430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.090419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.090445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.102424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.102452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.114424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.114448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.121870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.245 [2024-07-13 07:01:37.126424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.126444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.138425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.138445] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.150449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.150469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.162433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.162454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.174459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.174481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.186424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.186443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.198473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.198507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.210438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.210462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.222449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.222475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.230721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.245 [2024-07-13 07:01:37.234446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.234469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.246450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.246472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.258448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.258468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.270461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.270486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.282458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.282481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.294462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:15:29.245 [2024-07-13 07:01:37.294483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.306464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.306485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.245 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.245 [2024-07-13 07:01:37.318482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.245 [2024-07-13 07:01:37.318504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.330478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.330501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.342474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.342496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.354477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.354500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.366521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.366582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.378501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.378527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.390500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.390524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.402504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.402528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.414503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.414529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.426529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.426583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 Running I/O for 5 seconds... 
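Each of the repeated -32602 errors in this stretch is one rejected add: NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, so the target pauses the subsystem, fails in nvmf_rpc_ns_paused, logs "Requested NSID 1 already in use", and answers Invalid parameters, while the run simply continues. Reproduced in isolation (a hypothetical standalone session, with the NQN, bdev name and NSID taken from the log):

  # Rejected while NSID 1 is in use: JSON-RPC error -32602, Invalid parameters.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Detaching the namespace first (or choosing a free NSID) lets the add succeed.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1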
00:15:29.504 [2024-07-13 07:01:37.438517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.438541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.458017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.458064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.472718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.472766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.488915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.488960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.505830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.505862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.523234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.523284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.504 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.504 [2024-07-13 07:01:37.538483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.504 [2024-07-13 07:01:37.538513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.505 2024/07/13 07:01:37 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.505 [2024-07-13 07:01:37.549350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.505 [2024-07-13 07:01:37.549380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.505 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.505 [2024-07-13 07:01:37.564978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.505 [2024-07-13 07:01:37.565022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.505 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.581264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.581301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.597765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.597816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.613472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.613518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.623712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.623738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.637504] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.637533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.653307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.653335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.670364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.670393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.686500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.686529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.704395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.704424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.721028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.721056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.737788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.737832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.753583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.753612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.771147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.771174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.787852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.787893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.804351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.804397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:29.763 [2024-07-13 07:01:37.822124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.763 [2024-07-13 07:01:37.822171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.763 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.838182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.838210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.033 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.856243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.856269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.033 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.870731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.870759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.033 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.886329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.886355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.033 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.904374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.904417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.033 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.919708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.919735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.033 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.931468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.931494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.033 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.949231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.949276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.033 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.963758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.963786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.033 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.033 [2024-07-13 07:01:37.980580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.033 [2024-07-13 07:01:37.980625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.034 2024/07/13 07:01:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.034 [2024-07-13 07:01:37.996760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.034 [2024-07-13 07:01:37.996789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.034 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.034 [2024-07-13 07:01:38.014083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.034 [2024-07-13 07:01:38.014127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.034 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.034 [2024-07-13 07:01:38.030390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.034 [2024-07-13 07:01:38.030417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.034 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.034 [2024-07-13 07:01:38.047575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.034 [2024-07-13 07:01:38.047617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.034 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.034 [2024-07-13 07:01:38.063501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:30.034 [2024-07-13 07:01:38.063527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.034 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.034 [2024-07-13 07:01:38.078964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.034 [2024-07-13 07:01:38.078992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.034 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.034 [2024-07-13 07:01:38.094769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.034 [2024-07-13 07:01:38.094797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.034 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.034 [2024-07-13 07:01:38.105837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.034 [2024-07-13 07:01:38.105865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.120139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.120166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.131829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.131855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.147199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.147234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.157412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.157446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.172062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.172097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.188169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.188205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.199087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.199120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.213261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.213292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.230021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.230066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.245755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.245789] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.256227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.256258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.270769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.270799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.280982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.281019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.295385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.295414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.311199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.311229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.327912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.327945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.297 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.297 [2024-07-13 07:01:38.344927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.297 [2024-07-13 07:01:38.344962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.298 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.298 [2024-07-13 07:01:38.359833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.298 [2024-07-13 07:01:38.359869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.298 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.376214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.376253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.394166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.394201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.408353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.408384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.424518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.424593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.440962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.440994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.457584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.457615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.472989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.473020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.490029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.490092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.505839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.505873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.523539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.523598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.538996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.539028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:30.556 [2024-07-13 07:01:38.556027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.556059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.571612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.571641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.556 [2024-07-13 07:01:38.589084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.556 [2024-07-13 07:01:38.589117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.556 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.557 [2024-07-13 07:01:38.604161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.557 [2024-07-13 07:01:38.604192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.557 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.557 [2024-07-13 07:01:38.621394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.557 [2024-07-13 07:01:38.621428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.557 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.815 [2024-07-13 07:01:38.636590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.815 [2024-07-13 07:01:38.636668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.815 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.815 [2024-07-13 07:01:38.646948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.815 [2024-07-13 07:01:38.646981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.815 2024/07/13 07:01:38 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.815 [2024-07-13 07:01:38.662450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.815 [2024-07-13 07:01:38.662483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.815 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.815 [2024-07-13 07:01:38.679613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.815 [2024-07-13 07:01:38.679646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.815 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.815 [2024-07-13 07:01:38.694724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.815 [2024-07-13 07:01:38.694756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.815 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.815 [2024-07-13 07:01:38.713181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.815 [2024-07-13 07:01:38.713212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.815 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.815 [2024-07-13 07:01:38.727778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.815 [2024-07-13 07:01:38.727810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.815 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.815 [2024-07-13 07:01:38.744182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.815 [2024-07-13 07:01:38.744218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.816 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.816 [2024-07-13 07:01:38.760223] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.816 [2024-07-13 07:01:38.760273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.816 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.816 [2024-07-13 07:01:38.777818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.816 [2024-07-13 07:01:38.777855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.816 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.816 [2024-07-13 07:01:38.792972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.816 [2024-07-13 07:01:38.793005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.816 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.816 [2024-07-13 07:01:38.808980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.816 [2024-07-13 07:01:38.809011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.816 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.816 [2024-07-13 07:01:38.825641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.816 [2024-07-13 07:01:38.825674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.816 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.816 [2024-07-13 07:01:38.842239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.816 [2024-07-13 07:01:38.842276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.816 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.816 [2024-07-13 07:01:38.859403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.816 [2024-07-13 07:01:38.859436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.816 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:30.816 [2024-07-13 07:01:38.874928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.816 [2024-07-13 07:01:38.874962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.816 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.076 [2024-07-13 07:01:38.891182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.076 [2024-07-13 07:01:38.891215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.076 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.076 [2024-07-13 07:01:38.905407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.076 [2024-07-13 07:01:38.905438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.076 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.076 [2024-07-13 07:01:38.920721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.076 [2024-07-13 07:01:38.920752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.076 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.076 [2024-07-13 07:01:38.932036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.076 [2024-07-13 07:01:38.932067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.076 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.076 [2024-07-13 07:01:38.947260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.076 [2024-07-13 07:01:38.947292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.076 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.076 [2024-07-13 07:01:38.957232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:31.076 [2024-07-13 07:01:38.957264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.076 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.076 [2024-07-13 07:01:38.971936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.076 [2024-07-13 07:01:38.971974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.076 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.076 [2024-07-13 07:01:38.982822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.076 [2024-07-13 07:01:38.982858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.077 2024/07/13 07:01:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.077 [2024-07-13 07:01:38.997233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.077 [2024-07-13 07:01:38.997265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.077 2024/07/13 07:01:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.077 [2024-07-13 07:01:39.012903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.077 [2024-07-13 07:01:39.012935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.077 2024/07/13 07:01:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.077 [2024-07-13 07:01:39.030171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.077 [2024-07-13 07:01:39.030223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.077 2024/07/13 07:01:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.077 [2024-07-13 07:01:39.046303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.077 [2024-07-13 07:01:39.046354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.077 2024/07/13 07:01:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:31.077 [2024-07-13 07:01:39.062767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:31.077 [2024-07-13 07:01:39.062818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:31.077 2024/07/13 07:01:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line error sequence (spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use", nvmf_rpc_ns_paused "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats for every retried nvmf_subsystem_add_ns call, every 10-20 ms or so, from 07:01:39.079 through 07:01:40.261 (elapsed 00:15:31.077-00:15:32.377) ...]
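Every failure above is the same JSON-RPC exchange: the test client keeps calling nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is still attached, and the target rejects it with Code=-32602 (Invalid parameters). The "no_auto_visible:%!s(bool=false)" fragment is just the Go client applying a %s format verb to a boolean when it echoes the parameters, not part of the parameter value itself. As a rough illustration only (not the harness's actual Go client), the sketch below replays one such request by hand; the Unix socket path /var/tmp/spdk.sock is an assumed default and may differ in this CI environment.

#!/usr/bin/env python3
# Minimal sketch (assumption-laden): replay one of the failing
# nvmf_subsystem_add_ns calls from the log against a running SPDK target.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK JSON-RPC socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        # Same namespace parameters the log shows: bdev malloc0, NSID 1.
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    reply = json.loads(sock.recv(65536).decode())

# While NSID 1 is already attached, the target answers with the error the
# log repeats: {"code": -32602, "message": "Invalid parameters"}.
print(reply.get("error") or reply.get("result"))

With an NSID that is not already in use (or after the existing namespace is removed), the same request would be expected to succeed rather than return -32602.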
00:15:32.377 [2024-07-13 07:01:40.276236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:32.377 [2024-07-13 07:01:40.276288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:32.377 2024/07/13 07:01:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the identical error sequence continues for each subsequent attempt, from 07:01:40.292 through 07:01:41.169 (elapsed 00:15:32.377-00:15:33.155) ...]
00:15:33.155 [2024-07-13 07:01:41.178692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:33.155 [2024-07-13 07:01:41.178725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:33.155 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:33.155 [2024-07-13 07:01:41.192928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:15:33.155 [2024-07-13 07:01:41.192964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.155 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.155 [2024-07-13 07:01:41.208479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.155 [2024-07-13 07:01:41.208530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.156 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.156 [2024-07-13 07:01:41.217760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.156 [2024-07-13 07:01:41.217795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.156 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.233865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.233924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.250143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.250201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.265434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.265465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.282524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.282578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.296577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.296607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.314471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.314507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.328704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.328735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.344638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.344684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.362225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.362259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.378196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.414 [2024-07-13 07:01:41.378228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.414 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.414 [2024-07-13 07:01:41.395162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:33.414 [2024-07-13 07:01:41.395214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.415 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.415 [2024-07-13 07:01:41.411932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.415 [2024-07-13 07:01:41.411978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.415 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.415 [2024-07-13 07:01:41.427727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.415 [2024-07-13 07:01:41.427761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.415 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.415 [2024-07-13 07:01:41.443154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.415 [2024-07-13 07:01:41.443205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.415 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.415 [2024-07-13 07:01:41.461661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.415 [2024-07-13 07:01:41.461753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.415 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.415 [2024-07-13 07:01:41.477463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.415 [2024-07-13 07:01:41.477515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.415 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.673 [2024-07-13 07:01:41.494068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.673 [2024-07-13 07:01:41.494136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.673 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.673 [2024-07-13 07:01:41.509990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.673 [2024-07-13 07:01:41.510041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.673 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.673 [2024-07-13 07:01:41.525305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.673 [2024-07-13 07:01:41.525336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.673 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.673 [2024-07-13 07:01:41.540561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.673 [2024-07-13 07:01:41.540608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.552603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.552632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.568541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.568587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.585685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.585756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.601488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.601518] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.613246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.613274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.629749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.629779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.647003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.647031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.663330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.663358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.679540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.679581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.691067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.691097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.708352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.708383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.723679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.723712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.674 [2024-07-13 07:01:41.733013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.674 [2024-07-13 07:01:41.733042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.674 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.748367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.748402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.764655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.764705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.781034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.781079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.791718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.791752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.806670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.806716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.823030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.823064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.834784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.834815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.850586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.850635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.867686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.867722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.883807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.883853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:33.933 [2024-07-13 07:01:41.901544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.901594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.916910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.916960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.934006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.934059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.950562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.950615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.964997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.965046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:41.980941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:41.980991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:33.933 [2024-07-13 07:01:42.000085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:33.933 [2024-07-13 07:01:42.000138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:33.933 2024/07/13 07:01:42 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.014825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.192 [2024-07-13 07:01:42.014873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.192 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.033031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.192 [2024-07-13 07:01:42.033082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.192 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.047481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.192 [2024-07-13 07:01:42.047535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.192 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.057567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.192 [2024-07-13 07:01:42.057616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.192 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.071590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.192 [2024-07-13 07:01:42.071626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.192 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.086387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.192 [2024-07-13 07:01:42.086438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.192 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.103985] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.192 [2024-07-13 07:01:42.104039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.192 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.119088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.192 [2024-07-13 07:01:42.119139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.192 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.135046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.192 [2024-07-13 07:01:42.135101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.192 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.192 [2024-07-13 07:01:42.151691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.193 [2024-07-13 07:01:42.151729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.193 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.193 [2024-07-13 07:01:42.167325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.193 [2024-07-13 07:01:42.167377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.193 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.193 [2024-07-13 07:01:42.184588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.193 [2024-07-13 07:01:42.184641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.193 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.193 [2024-07-13 07:01:42.198946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.193 [2024-07-13 07:01:42.198980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.193 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.193 [2024-07-13 07:01:42.214979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.193 [2024-07-13 07:01:42.215030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.193 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.193 [2024-07-13 07:01:42.232659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.193 [2024-07-13 07:01:42.232710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.193 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.193 [2024-07-13 07:01:42.248440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.193 [2024-07-13 07:01:42.248478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.193 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.193 [2024-07-13 07:01:42.264372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.193 [2024-07-13 07:01:42.264428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.451 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.451 [2024-07-13 07:01:42.281008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.451 [2024-07-13 07:01:42.281054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.451 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.451 [2024-07-13 07:01:42.297331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.451 [2024-07-13 07:01:42.297363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.452 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.452 [2024-07-13 07:01:42.314594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:34.452 [2024-07-13 07:01:42.314640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.452 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.452 [2024-07-13 07:01:42.331997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.452 [2024-07-13 07:01:42.332059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.452 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.452 [2024-07-13 07:01:42.348455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.452 [2024-07-13 07:01:42.348508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.452 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.452 [2024-07-13 07:01:42.365118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.452 [2024-07-13 07:01:42.365154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.452 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.452 [2024-07-13 07:01:42.382229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.452 [2024-07-13 07:01:42.382281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.452 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.452 [2024-07-13 07:01:42.397542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.452 [2024-07-13 07:01:42.397587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.452 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.452 [2024-07-13 07:01:42.413631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.452 [2024-07-13 07:01:42.413657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.452 2024/07/13 07:01:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
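For context: each repetition above is the target's view of a single nvmf_subsystem_add_ns JSON-RPC call that asks to attach bdev malloc0 as NSID 1 to nqn.2016-06.io.spdk:cnode1 while NSID 1 is already in use. A minimal sketch of reproducing one such rejection by hand, assuming a running SPDK target on its default RPC socket, bdev malloc0 already created, and the scripts/rpc.py helper in the repo checkout used later in this log (the two-call sequence and the exact socket path are assumptions, the method and parameter names are taken from the error lines above):

  cd /home/vagrant/spdk_repo/spdk
  # First call attaches malloc0 as NSID 1; repeating the same call is rejected
  # with Code=-32602 Msg=Invalid parameters, as logged above.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Equivalent raw JSON-RPC request body, matching the params map in the log:
  # {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
  #  "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
  #             "namespace": {"bdev_name": "malloc0", "nsid": 1}}}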
00:15:34.452 Latency(us)
00:15:34.452 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:15:34.452 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:34.452 Nvme1n1                     :       5.01   12041.00      94.07      0.00      0.00   10616.86    4230.05   20137.43
00:15:34.452 ===================================================================================================================
00:15:34.452 Total                       :              12041.00      94.07      0.00      0.00   10616.86    4230.05   20137.43
[... once the summary is printed, the same three-line nvmf_subsystem_add_ns error sequence resumes at 07:01:42.453 and repeats through 07:01:42.717 (elapsed 00:15:34.452 to 00:15:34.712), after which it stops ...]
Msg=Invalid parameters 00:15:34.712 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (92182) - No such process 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 92182 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:34.712 delay0 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.712 07:01:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:34.970 [2024-07-13 07:01:42.914257] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:43.077 Initializing NVMe Controllers 00:15:43.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:43.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:43.077 Initialization complete. Launching workers. 
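For reference, the namespace swap that zcopy.sh drives above can be reproduced by hand against a running target. This is a minimal sketch using scripts/rpc.py and the abort example binary (paths relative to the spdk_repo/spdk checkout), assuming the same bdev names, subsystem NQN and 10.0.0.2:4420 listener as in this run:

  # detach the malloc-backed namespace (NSID 1) from the subsystem
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 in a delay bdev and re-expose it as NSID 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive I/O plus abort commands at the now-slow namespace for the 5-second run requested with -t 5
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The report that follows tallies the I/Os completed and failed alongside the abort commands submitted and how many of them succeeded.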
00:15:43.077 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 260, failed: 19020 00:15:43.077 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19189, failed to submit 91 00:15:43.077 success 19080, unsuccess 109, failed 0 00:15:43.077 07:01:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:43.077 07:01:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:43.077 07:01:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.077 07:01:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.077 rmmod nvme_tcp 00:15:43.077 rmmod nvme_fabrics 00:15:43.077 rmmod nvme_keyring 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 92009 ']' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 92009 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 92009 ']' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 92009 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92009 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:43.077 killing process with pid 92009 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92009' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 92009 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 92009 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:43.077 00:15:43.077 real 0m25.821s 00:15:43.077 user 0m39.744s 00:15:43.077 sys 0m8.646s 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.077 07:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:43.077 ************************************ 00:15:43.077 END TEST nvmf_zcopy 00:15:43.077 ************************************ 00:15:43.077 07:01:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:43.077 07:01:50 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:43.077 07:01:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:43.077 07:01:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.077 07:01:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.077 ************************************ 00:15:43.077 START TEST nvmf_nmic 00:15:43.077 ************************************ 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:43.077 * Looking for test storage... 00:15:43.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.077 07:01:50 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:43.078 Cannot find device "nvmf_tgt_br" 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.078 Cannot find device "nvmf_tgt_br2" 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:43.078 Cannot find device "nvmf_tgt_br" 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:43.078 Cannot find device "nvmf_tgt_br2" 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:43.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:43.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:43.078 00:15:43.078 --- 10.0.0.2 ping statistics --- 00:15:43.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.078 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:43.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:43.078 00:15:43.078 --- 10.0.0.3 ping statistics --- 00:15:43.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.078 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:43.078 00:15:43.078 --- 10.0.0.1 ping statistics --- 00:15:43.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.078 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=92503 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 92503 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 92503 ']' 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.078 07:01:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:43.078 [2024-07-13 07:01:50.928685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:15:43.078 [2024-07-13 07:01:50.928826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.078 [2024-07-13 07:01:51.080605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.337 [2024-07-13 07:01:51.202238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.337 [2024-07-13 07:01:51.202310] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.337 [2024-07-13 07:01:51.202324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.337 [2024-07-13 07:01:51.202335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.337 [2024-07-13 07:01:51.202345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.337 [2024-07-13 07:01:51.202511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.337 [2024-07-13 07:01:51.202950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.337 [2024-07-13 07:01:51.203724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.337 [2024-07-13 07:01:51.203739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:43.905 [2024-07-13 07:01:51.960450] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.905 07:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 Malloc0 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
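The target bring-up traced here is a handful of JSON-RPC calls. A minimal sketch of the same sequence via scripts/rpc.py, assuming the RPC socket of the nvmf_tgt started above is reachable at the default /var/tmp/spdk.sock:

  # create the TCP transport with the same options the harness passes (-o, -u 8192)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks, named Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem allowing any host (-a), with the serial the test later greps for
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # the listener on 10.0.0.2:4420 is added in the trace that follows
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420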
00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 [2024-07-13 07:01:52.030224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.164 test case1: single bdev can't be used in multiple subsystems 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 [2024-07-13 07:01:52.054061] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:44.164 [2024-07-13 07:01:52.054108] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:44.164 [2024-07-13 07:01:52.054119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.164 2024/07/13 07:01:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.164 request: 00:15:44.164 { 00:15:44.164 "method": "nvmf_subsystem_add_ns", 00:15:44.164 "params": { 00:15:44.164 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:44.164 "namespace": { 00:15:44.164 "bdev_name": "Malloc0", 00:15:44.164 "no_auto_visible": false 00:15:44.164 } 00:15:44.164 } 00:15:44.164 } 00:15:44.164 Got JSON-RPC error response 00:15:44.164 GoRPCClient: error on JSON-RPC call 00:15:44.164 Adding namespace failed - expected result. 
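The expected failure above can be reproduced in isolation once both subsystems exist. A minimal sketch mirroring the nmic_status check, assuming Malloc0 is already attached to cnode1 as in this run:

  # Malloc0 is claimed exclusive_write by cnode1, so attaching it to cnode2 must fail
  if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'unexpected: one bdev was attached to two subsystems' >&2
      exit 1
  fi
  echo 'expected result: add_ns rejected with Code=-32602 Msg=Invalid parameters'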
00:15:44.164 test case2: host connect to nvmf target in multiple paths 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 [2024-07-13 07:01:52.066169] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.164 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:44.423 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:44.423 07:01:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:44.423 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:44.423 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:44.423 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:44.423 07:01:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:46.955 07:01:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:46.955 07:01:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:46.955 07:01:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:46.955 07:01:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:46.955 07:01:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:46.956 07:01:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:46.956 07:01:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:46.956 [global] 00:15:46.956 thread=1 00:15:46.956 invalidate=1 00:15:46.956 rw=write 00:15:46.956 time_based=1 00:15:46.956 runtime=1 00:15:46.956 ioengine=libaio 00:15:46.956 direct=1 00:15:46.956 bs=4096 00:15:46.956 iodepth=1 00:15:46.956 norandommap=0 00:15:46.956 numjobs=1 00:15:46.956 00:15:46.956 verify_dump=1 00:15:46.956 verify_backlog=512 00:15:46.956 verify_state_save=0 00:15:46.956 do_verify=1 00:15:46.956 verify=crc32c-intel 00:15:46.956 [job0] 00:15:46.956 filename=/dev/nvme0n1 00:15:46.956 Could not set queue depth (nvme0n1) 00:15:46.956 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.956 fio-3.35 00:15:46.956 Starting 1 thread 00:15:47.891 00:15:47.891 job0: (groupid=0, jobs=1): err= 0: pid=92613: Sat Jul 13 07:01:55 2024 00:15:47.891 read: IOPS=2884, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:15:47.891 slat (nsec): min=12580, max=64058, avg=15745.78, stdev=4998.27 00:15:47.891 clat (usec): min=130, max=589, avg=174.10, stdev=25.86 00:15:47.891 lat (usec): min=144, max=603, avg=189.85, stdev=26.69 00:15:47.891 clat percentiles (usec): 00:15:47.891 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:15:47.891 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:15:47.891 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 217], 00:15:47.891 | 99.00th=[ 241], 99.50th=[ 273], 99.90th=[ 424], 99.95th=[ 433], 00:15:47.891 | 99.99th=[ 586] 00:15:47.891 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:47.891 slat (usec): min=15, max=175, avg=23.08, stdev= 7.70 00:15:47.891 clat (usec): min=88, max=667, avg=120.40, stdev=22.94 00:15:47.891 lat (usec): min=107, max=688, avg=143.47, stdev=24.79 00:15:47.891 clat percentiles (usec): 00:15:47.891 | 1.00th=[ 95], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 105], 00:15:47.891 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 119], 00:15:47.891 | 70.00th=[ 126], 80.00th=[ 135], 90.00th=[ 147], 95.00th=[ 155], 00:15:47.891 | 99.00th=[ 178], 99.50th=[ 217], 99.90th=[ 289], 99.95th=[ 465], 00:15:47.891 | 99.99th=[ 668] 00:15:47.891 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:15:47.891 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:47.891 lat (usec) : 100=3.68%, 250=95.82%, 500=0.47%, 750=0.03% 00:15:47.891 cpu : usr=2.50%, sys=8.50%, ctx=5959, majf=0, minf=2 00:15:47.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.891 issued rwts: total=2887,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.891 00:15:47.891 Run status group 0 (all jobs): 00:15:47.891 READ: bw=11.3MiB/s (11.8MB/s), 11.3MiB/s-11.3MiB/s (11.8MB/s-11.8MB/s), io=11.3MiB (11.8MB), run=1001-1001msec 00:15:47.891 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:15:47.891 00:15:47.891 Disk stats (read/write): 00:15:47.891 nvme0n1: ios=2610/2800, merge=0/0, ticks=482/365, in_queue=847, util=91.28% 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1231 -- # return 0 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.891 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.892 rmmod nvme_tcp 00:15:47.892 rmmod nvme_fabrics 00:15:47.892 rmmod nvme_keyring 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 92503 ']' 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 92503 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 92503 ']' 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 92503 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92503 00:15:47.892 killing process with pid 92503 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92503' 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 92503 00:15:47.892 07:01:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 92503 00:15:48.459 07:01:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.459 07:01:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.460 07:01:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.460 07:01:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.460 07:01:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.460 07:01:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.460 07:01:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:48.460 00:15:48.460 real 0m5.929s 00:15:48.460 user 0m19.851s 00:15:48.460 sys 0m1.295s 00:15:48.460 07:01:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:48.460 ************************************ 00:15:48.460 END TEST nvmf_nmic 00:15:48.460 07:01:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.460 ************************************ 00:15:48.460 07:01:56 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:48.460 07:01:56 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:48.460 07:01:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:48.460 07:01:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.460 07:01:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.460 ************************************ 00:15:48.460 START TEST nvmf_fio_target 00:15:48.460 ************************************ 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:48.460 * Looking for test storage... 00:15:48.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:48.460 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:48.720 Cannot find device "nvmf_tgt_br" 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.720 Cannot find device "nvmf_tgt_br2" 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:15:48.720 Cannot find device "nvmf_tgt_br" 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:48.720 Cannot find device "nvmf_tgt_br2" 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.720 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:15:48.978 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:48.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:15:48.979 00:15:48.979 --- 10.0.0.2 ping statistics --- 00:15:48.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.979 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:48.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:48.979 00:15:48.979 --- 10.0.0.3 ping statistics --- 00:15:48.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.979 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:48.979 00:15:48.979 --- 10.0.0.1 ping statistics --- 00:15:48.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.979 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=92794 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 92794 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 92794 ']' 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.979 07:01:56 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.979 07:01:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.979 [2024-07-13 07:01:56.940300] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:48.979 [2024-07-13 07:01:56.940403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.237 [2024-07-13 07:01:57.081743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.237 [2024-07-13 07:01:57.198910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.237 [2024-07-13 07:01:57.199247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.237 [2024-07-13 07:01:57.199419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.237 [2024-07-13 07:01:57.199470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.237 [2024-07-13 07:01:57.199619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.237 [2024-07-13 07:01:57.200252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.237 [2024-07-13 07:01:57.200421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.237 [2024-07-13 07:01:57.201247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.237 [2024-07-13 07:01:57.201296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.183 07:01:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.183 07:01:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:50.183 07:01:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:50.183 07:01:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:50.183 07:01:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.183 07:01:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.183 07:01:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:50.183 [2024-07-13 07:01:58.232434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.441 07:01:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.699 07:01:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:50.699 07:01:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:50.958 07:01:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
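Note on the trace above: nvmftestinit/nvmf_veth_init builds a loopback NVMe/TCP topology for this test, and the "Cannot find device" / "Cannot open network namespace" messages are only the idempotent cleanup of a previous run finding nothing to remove. Condensed into a plain script, and assuming root plus the iproute2 and iptables tools already present on the test host, the setup the trace executes amounts to the following sketch (interface, namespace and address names are the ones shown in the trace):

# Sketch of the veth/bridge topology nvmf_veth_init creates above (requires root).
NETNS=nvmf_tgt_ns_spdk
ip netns add "$NETNS"

# veth pairs: the *_if ends carry traffic, the *_br ends get bridged together;
# the two target-side interfaces move into the target's network namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NETNS"
ip link set nvmf_tgt_if2 netns "$NETNS"

# addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listeners
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NETNS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec "$NETNS" ip link set "$dev" up; done

# bridge the host-side peers so the default namespace and the target namespace can talk
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# accept NVMe/TCP (port 4420) on the initiator interface and allow bridge forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks, matching the three pings in the trace
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec "$NETNS" ping -c 1 10.0.0.1

With the links up, nvmf_tgt is started inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as shown in the trace) so that 10.0.0.2/10.0.0.3 become the target-side addresses the initiator connects to.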
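The trace then continues below by assembling, via scripts/rpc.py, the block devices and the NVMe-oF subsystem that fio exercises: two standalone malloc bdevs, a two-member RAID-0 (raid0) and a three-member concat (concat0), all attached as namespaces of nqn.2016-06.io.spdk:cnode1 with a TCP listener on 10.0.0.2:4420, after which the initiator connects with nvme-cli and fio-wrapper runs 4 KiB write/randwrite/read jobs against /dev/nvme0n1..n4. A condensed, non-authoritative sketch of that RPC sequence, with paths, options and names copied from the trace and the bdev names assumed to come back as Malloc0..Malloc6 exactly as they do there, is:

# Sketch of the target/initiator setup performed by fio.sh in the trace below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192        # transport options copied verbatim from the trace

# two standalone malloc namespaces; rpc.py prints the new bdev name (Malloc0, Malloc1)
malloc0=$($rpc bdev_malloc_create 64 512)
malloc1=$($rpc bdev_malloc_create 64 512)

# two more malloc bdevs combined into a RAID-0, three more into a concat bdev
$rpc bdev_malloc_create 64 512; $rpc bdev_malloc_create 64 512              # Malloc2, Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_malloc_create 64 512; $rpc bdev_malloc_create 64 512; $rpc bdev_malloc_create 64 512   # Malloc4..Malloc6
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# expose everything through one subsystem and listen on the namespaced target address
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for ns in "$malloc0" "$malloc1" raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side (default namespace): connect, then wait for all four namespaces to appear;
# the trace additionally passes --hostnqn/--hostid derived from the test host's ID
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do sleep 2; done

After the connect, fio-wrapper generates the job files echoed below (libaio, direct=1, bs=4096, one job per /dev/nvme0nX, with iodepth 1 and then 128) and the per-job latency tables are fio's normal output. The "Remote I/O error" reads near the end of the section are not a failure of the run: the final fio job is a 10-second read during which the trace deletes concat0, raid0 and the malloc bdevs out from under it, presumably to exercise namespace removal under active I/O, so err=121 is the expected outcome there.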
00:15:50.958 07:01:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.217 07:01:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:51.217 07:01:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.475 07:01:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:51.475 07:01:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:51.734 07:01:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.992 07:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:51.992 07:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.253 07:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:52.253 07:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.515 07:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:52.515 07:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:52.774 07:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:53.032 07:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:53.032 07:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:53.291 07:02:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:53.291 07:02:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:53.549 07:02:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.807 [2024-07-13 07:02:01.767821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.807 07:02:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:54.064 07:02:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:54.322 07:02:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:54.580 07:02:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:54.580 07:02:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:54.580 07:02:02 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.580 07:02:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:54.580 07:02:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:54.580 07:02:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:56.480 07:02:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:56.480 07:02:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:56.480 07:02:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.480 07:02:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:56.480 07:02:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.480 07:02:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:56.480 07:02:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:56.480 [global] 00:15:56.480 thread=1 00:15:56.480 invalidate=1 00:15:56.480 rw=write 00:15:56.480 time_based=1 00:15:56.480 runtime=1 00:15:56.480 ioengine=libaio 00:15:56.480 direct=1 00:15:56.480 bs=4096 00:15:56.480 iodepth=1 00:15:56.480 norandommap=0 00:15:56.480 numjobs=1 00:15:56.480 00:15:56.480 verify_dump=1 00:15:56.480 verify_backlog=512 00:15:56.480 verify_state_save=0 00:15:56.480 do_verify=1 00:15:56.480 verify=crc32c-intel 00:15:56.480 [job0] 00:15:56.480 filename=/dev/nvme0n1 00:15:56.480 [job1] 00:15:56.480 filename=/dev/nvme0n2 00:15:56.480 [job2] 00:15:56.480 filename=/dev/nvme0n3 00:15:56.480 [job3] 00:15:56.480 filename=/dev/nvme0n4 00:15:56.738 Could not set queue depth (nvme0n1) 00:15:56.738 Could not set queue depth (nvme0n2) 00:15:56.738 Could not set queue depth (nvme0n3) 00:15:56.738 Could not set queue depth (nvme0n4) 00:15:56.738 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:56.738 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:56.738 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:56.738 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:56.738 fio-3.35 00:15:56.738 Starting 4 threads 00:15:58.115 00:15:58.115 job0: (groupid=0, jobs=1): err= 0: pid=93094: Sat Jul 13 07:02:05 2024 00:15:58.115 read: IOPS=1163, BW=4655KiB/s (4767kB/s)(4660KiB/1001msec) 00:15:58.115 slat (nsec): min=18639, max=78186, avg=33101.07, stdev=9629.69 00:15:58.115 clat (usec): min=210, max=2705, avg=385.71, stdev=81.52 00:15:58.115 lat (usec): min=248, max=2727, avg=418.82, stdev=80.46 00:15:58.115 clat percentiles (usec): 00:15:58.115 | 1.00th=[ 293], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 351], 00:15:58.115 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 388], 00:15:58.115 | 70.00th=[ 400], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 465], 00:15:58.115 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[ 709], 99.95th=[ 2704], 00:15:58.115 | 99.99th=[ 2704] 00:15:58.115 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:58.115 slat (usec): min=27, max=222, avg=42.23, stdev=10.31 00:15:58.115 clat (usec): min=133, max=664, avg=284.52, 
stdev=57.98 00:15:58.115 lat (usec): min=169, max=698, avg=326.74, stdev=57.82 00:15:58.115 clat percentiles (usec): 00:15:58.115 | 1.00th=[ 178], 5.00th=[ 215], 10.00th=[ 229], 20.00th=[ 241], 00:15:58.115 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 281], 00:15:58.115 | 70.00th=[ 297], 80.00th=[ 326], 90.00th=[ 379], 95.00th=[ 404], 00:15:58.115 | 99.00th=[ 437], 99.50th=[ 461], 99.90th=[ 562], 99.95th=[ 668], 00:15:58.115 | 99.99th=[ 668] 00:15:58.115 bw ( KiB/s): min= 6904, max= 6904, per=23.57%, avg=6904.00, stdev= 0.00, samples=1 00:15:58.115 iops : min= 1726, max= 1726, avg=1726.00, stdev= 0.00, samples=1 00:15:58.115 lat (usec) : 250=16.36%, 500=83.04%, 750=0.56% 00:15:58.115 lat (msec) : 4=0.04% 00:15:58.115 cpu : usr=2.60%, sys=7.40%, ctx=2704, majf=0, minf=11 00:15:58.115 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.115 issued rwts: total=1165,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.115 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.115 job1: (groupid=0, jobs=1): err= 0: pid=93095: Sat Jul 13 07:02:05 2024 00:15:58.115 read: IOPS=1156, BW=4627KiB/s (4738kB/s)(4632KiB/1001msec) 00:15:58.115 slat (nsec): min=16466, max=93062, avg=22483.50, stdev=5997.11 00:15:58.115 clat (usec): min=161, max=649, avg=396.36, stdev=44.43 00:15:58.115 lat (usec): min=183, max=694, avg=418.84, stdev=44.28 00:15:58.115 clat percentiles (usec): 00:15:58.115 | 1.00th=[ 314], 5.00th=[ 338], 10.00th=[ 351], 20.00th=[ 367], 00:15:58.115 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 400], 00:15:58.115 | 70.00th=[ 412], 80.00th=[ 437], 90.00th=[ 453], 95.00th=[ 465], 00:15:58.115 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 627], 99.95th=[ 652], 00:15:58.116 | 99.99th=[ 652] 00:15:58.116 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:58.116 slat (usec): min=26, max=116, avg=41.22, stdev= 8.78 00:15:58.116 clat (usec): min=142, max=2483, avg=288.82, stdev=82.53 00:15:58.116 lat (usec): min=183, max=2517, avg=330.04, stdev=82.21 00:15:58.116 clat percentiles (usec): 00:15:58.116 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 243], 00:15:58.116 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:15:58.116 | 70.00th=[ 302], 80.00th=[ 330], 90.00th=[ 379], 95.00th=[ 400], 00:15:58.116 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 963], 99.95th=[ 2474], 00:15:58.116 | 99.99th=[ 2474] 00:15:58.116 bw ( KiB/s): min= 6776, max= 6776, per=23.13%, avg=6776.00, stdev= 0.00, samples=1 00:15:58.116 iops : min= 1694, max= 1694, avg=1694.00, stdev= 0.00, samples=1 00:15:58.116 lat (usec) : 250=15.48%, 500=83.78%, 750=0.67%, 1000=0.04% 00:15:58.116 lat (msec) : 4=0.04% 00:15:58.116 cpu : usr=1.60%, sys=6.80%, ctx=2695, majf=0, minf=10 00:15:58.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.116 issued rwts: total=1158,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.116 job2: (groupid=0, jobs=1): err= 0: pid=93096: Sat Jul 13 07:02:05 2024 00:15:58.116 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:58.116 slat 
(nsec): min=14849, max=60489, avg=18761.97, stdev=5350.21 00:15:58.116 clat (usec): min=193, max=2076, avg=242.05, stdev=50.11 00:15:58.116 lat (usec): min=209, max=2095, avg=260.81, stdev=50.39 00:15:58.116 clat percentiles (usec): 00:15:58.116 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:15:58.116 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:15:58.116 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:15:58.116 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 709], 99.95th=[ 1029], 00:15:58.116 | 99.99th=[ 2073] 00:15:58.116 write: IOPS=2088, BW=8356KiB/s (8556kB/s)(8364KiB/1001msec); 0 zone resets 00:15:58.116 slat (usec): min=21, max=121, avg=28.36, stdev= 8.64 00:15:58.116 clat (usec): min=148, max=479, avg=190.25, stdev=22.20 00:15:58.116 lat (usec): min=171, max=601, avg=218.62, stdev=25.89 00:15:58.116 clat percentiles (usec): 00:15:58.116 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:15:58.116 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:15:58.116 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 231], 00:15:58.116 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 297], 00:15:58.116 | 99.99th=[ 482] 00:15:58.116 bw ( KiB/s): min= 8192, max= 8192, per=27.97%, avg=8192.00, stdev= 0.00, samples=1 00:15:58.116 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:58.116 lat (usec) : 250=84.68%, 500=15.25%, 750=0.02% 00:15:58.116 lat (msec) : 2=0.02%, 4=0.02% 00:15:58.116 cpu : usr=2.40%, sys=6.50%, ctx=4139, majf=0, minf=9 00:15:58.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.116 issued rwts: total=2048,2091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.116 job3: (groupid=0, jobs=1): err= 0: pid=93097: Sat Jul 13 07:02:05 2024 00:15:58.116 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:58.116 slat (nsec): min=12723, max=54349, avg=16297.56, stdev=4033.67 00:15:58.116 clat (usec): min=175, max=820, avg=239.62, stdev=28.93 00:15:58.116 lat (usec): min=189, max=861, avg=255.91, stdev=29.50 00:15:58.116 clat percentiles (usec): 00:15:58.116 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 219], 00:15:58.116 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:15:58.116 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 285], 00:15:58.116 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 383], 99.95th=[ 545], 00:15:58.116 | 99.99th=[ 824] 00:15:58.116 write: IOPS=2164, BW=8659KiB/s (8867kB/s)(8668KiB/1001msec); 0 zone resets 00:15:58.116 slat (usec): min=18, max=126, avg=25.00, stdev= 6.86 00:15:58.116 clat (usec): min=119, max=963, avg=190.68, stdev=31.06 00:15:58.116 lat (usec): min=139, max=1001, avg=215.68, stdev=32.55 00:15:58.116 clat percentiles (usec): 00:15:58.116 | 1.00th=[ 143], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 169], 00:15:58.116 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:15:58.116 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 235], 00:15:58.116 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 379], 99.95th=[ 578], 00:15:58.116 | 99.99th=[ 963] 00:15:58.116 bw ( KiB/s): min= 8256, max= 8256, per=28.19%, avg=8256.00, stdev= 0.00, samples=1 00:15:58.116 iops : min= 2064, max= 
2064, avg=2064.00, stdev= 0.00, samples=1 00:15:58.116 lat (usec) : 250=83.84%, 500=16.06%, 750=0.05%, 1000=0.05% 00:15:58.116 cpu : usr=2.30%, sys=5.90%, ctx=4218, majf=0, minf=5 00:15:58.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.116 issued rwts: total=2048,2167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.116 00:15:58.116 Run status group 0 (all jobs): 00:15:58.116 READ: bw=25.0MiB/s (26.3MB/s), 4627KiB/s-8184KiB/s (4738kB/s-8380kB/s), io=25.1MiB (26.3MB), run=1001-1001msec 00:15:58.116 WRITE: bw=28.6MiB/s (30.0MB/s), 6138KiB/s-8659KiB/s (6285kB/s-8867kB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:15:58.116 00:15:58.116 Disk stats (read/write): 00:15:58.116 nvme0n1: ios=1074/1278, merge=0/0, ticks=431/391, in_queue=822, util=87.98% 00:15:58.116 nvme0n2: ios=1072/1262, merge=0/0, ticks=456/387, in_queue=843, util=90.07% 00:15:58.116 nvme0n3: ios=1593/2048, merge=0/0, ticks=423/411, in_queue=834, util=89.66% 00:15:58.116 nvme0n4: ios=1606/2048, merge=0/0, ticks=389/416, in_queue=805, util=89.70% 00:15:58.116 07:02:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:58.116 [global] 00:15:58.116 thread=1 00:15:58.116 invalidate=1 00:15:58.116 rw=randwrite 00:15:58.116 time_based=1 00:15:58.116 runtime=1 00:15:58.116 ioengine=libaio 00:15:58.116 direct=1 00:15:58.116 bs=4096 00:15:58.116 iodepth=1 00:15:58.116 norandommap=0 00:15:58.116 numjobs=1 00:15:58.116 00:15:58.116 verify_dump=1 00:15:58.116 verify_backlog=512 00:15:58.116 verify_state_save=0 00:15:58.116 do_verify=1 00:15:58.116 verify=crc32c-intel 00:15:58.116 [job0] 00:15:58.116 filename=/dev/nvme0n1 00:15:58.116 [job1] 00:15:58.116 filename=/dev/nvme0n2 00:15:58.116 [job2] 00:15:58.116 filename=/dev/nvme0n3 00:15:58.116 [job3] 00:15:58.116 filename=/dev/nvme0n4 00:15:58.116 Could not set queue depth (nvme0n1) 00:15:58.116 Could not set queue depth (nvme0n2) 00:15:58.116 Could not set queue depth (nvme0n3) 00:15:58.116 Could not set queue depth (nvme0n4) 00:15:58.116 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.116 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.116 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.116 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.116 fio-3.35 00:15:58.116 Starting 4 threads 00:15:59.493 00:15:59.494 job0: (groupid=0, jobs=1): err= 0: pid=93152: Sat Jul 13 07:02:07 2024 00:15:59.494 read: IOPS=1480, BW=5922KiB/s (6064kB/s)(5928KiB/1001msec) 00:15:59.494 slat (nsec): min=18147, max=72077, avg=22853.84, stdev=5246.47 00:15:59.494 clat (usec): min=164, max=652, avg=332.85, stdev=85.34 00:15:59.494 lat (usec): min=184, max=675, avg=355.71, stdev=86.89 00:15:59.494 clat percentiles (usec): 00:15:59.494 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 198], 20.00th=[ 229], 00:15:59.494 | 30.00th=[ 281], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 371], 00:15:59.494 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 424], 95.00th=[ 445], 00:15:59.494 | 99.00th=[ 494], 
99.50th=[ 537], 99.90th=[ 611], 99.95th=[ 652], 00:15:59.494 | 99.99th=[ 652] 00:15:59.494 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:59.494 slat (usec): min=25, max=169, avg=38.89, stdev= 9.36 00:15:59.494 clat (usec): min=118, max=744, avg=263.38, stdev=65.70 00:15:59.494 lat (usec): min=151, max=779, avg=302.27, stdev=66.78 00:15:59.494 clat percentiles (usec): 00:15:59.494 | 1.00th=[ 137], 5.00th=[ 159], 10.00th=[ 178], 20.00th=[ 219], 00:15:59.494 | 30.00th=[ 233], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 273], 00:15:59.494 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 367], 95.00th=[ 392], 00:15:59.494 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 685], 99.95th=[ 742], 00:15:59.494 | 99.99th=[ 742] 00:15:59.494 bw ( KiB/s): min= 7352, max= 7352, per=24.46%, avg=7352.00, stdev= 0.00, samples=1 00:15:59.494 iops : min= 1838, max= 1838, avg=1838.00, stdev= 0.00, samples=1 00:15:59.494 lat (usec) : 250=34.36%, 500=65.11%, 750=0.53% 00:15:59.494 cpu : usr=1.30%, sys=7.40%, ctx=3018, majf=0, minf=9 00:15:59.494 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.494 issued rwts: total=1482,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.494 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.494 job1: (groupid=0, jobs=1): err= 0: pid=93153: Sat Jul 13 07:02:07 2024 00:15:59.494 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:59.494 slat (nsec): min=14070, max=80286, avg=17011.19, stdev=4578.85 00:15:59.494 clat (usec): min=164, max=2136, avg=231.12, stdev=52.27 00:15:59.494 lat (usec): min=181, max=2163, avg=248.14, stdev=52.58 00:15:59.494 clat percentiles (usec): 00:15:59.494 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 00:15:59.494 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:15:59.494 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:15:59.494 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 490], 99.95th=[ 758], 00:15:59.494 | 99.99th=[ 2147] 00:15:59.494 write: IOPS=2398, BW=9594KiB/s (9825kB/s)(9604KiB/1001msec); 0 zone resets 00:15:59.494 slat (usec): min=19, max=131, avg=24.53, stdev= 6.98 00:15:59.494 clat (usec): min=113, max=1754, avg=176.84, stdev=40.57 00:15:59.494 lat (usec): min=137, max=1777, avg=201.37, stdev=41.47 00:15:59.494 clat percentiles (usec): 00:15:59.494 | 1.00th=[ 129], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 155], 00:15:59.494 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 180], 00:15:59.494 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 221], 00:15:59.494 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 297], 99.95th=[ 322], 00:15:59.494 | 99.99th=[ 1762] 00:15:59.494 bw ( KiB/s): min= 9472, max= 9472, per=31.52%, avg=9472.00, stdev= 0.00, samples=1 00:15:59.494 iops : min= 2368, max= 2368, avg=2368.00, stdev= 0.00, samples=1 00:15:59.494 lat (usec) : 250=89.91%, 500=10.02%, 1000=0.02% 00:15:59.494 lat (msec) : 2=0.02%, 4=0.02% 00:15:59.494 cpu : usr=1.90%, sys=6.70%, ctx=4450, majf=0, minf=12 00:15:59.494 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.494 issued rwts: total=2048,2401,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:15:59.494 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.494 job2: (groupid=0, jobs=1): err= 0: pid=93154: Sat Jul 13 07:02:07 2024 00:15:59.494 read: IOPS=1202, BW=4811KiB/s (4927kB/s)(4816KiB/1001msec) 00:15:59.494 slat (nsec): min=11792, max=91606, avg=28399.09, stdev=10883.11 00:15:59.494 clat (usec): min=233, max=717, avg=380.82, stdev=45.11 00:15:59.494 lat (usec): min=259, max=730, avg=409.22, stdev=42.57 00:15:59.494 clat percentiles (usec): 00:15:59.494 | 1.00th=[ 302], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 347], 00:15:59.494 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 388], 00:15:59.494 | 70.00th=[ 400], 80.00th=[ 420], 90.00th=[ 437], 95.00th=[ 457], 00:15:59.494 | 99.00th=[ 498], 99.50th=[ 545], 99.90th=[ 644], 99.95th=[ 717], 00:15:59.494 | 99.99th=[ 717] 00:15:59.494 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:59.494 slat (usec): min=12, max=131, avg=35.37, stdev= 9.63 00:15:59.494 clat (usec): min=128, max=7113, avg=289.28, stdev=217.96 00:15:59.494 lat (usec): min=176, max=7150, avg=324.65, stdev=218.34 00:15:59.494 clat percentiles (usec): 00:15:59.494 | 1.00th=[ 169], 5.00th=[ 200], 10.00th=[ 219], 20.00th=[ 235], 00:15:59.494 | 30.00th=[ 245], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 285], 00:15:59.494 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 367], 95.00th=[ 400], 00:15:59.494 | 99.00th=[ 469], 99.50th=[ 685], 99.90th=[ 3458], 99.95th=[ 7111], 00:15:59.494 | 99.99th=[ 7111] 00:15:59.494 bw ( KiB/s): min= 7344, max= 7344, per=24.44%, avg=7344.00, stdev= 0.00, samples=1 00:15:59.494 iops : min= 1836, max= 1836, avg=1836.00, stdev= 0.00, samples=1 00:15:59.494 lat (usec) : 250=18.65%, 500=80.51%, 750=0.62%, 1000=0.04% 00:15:59.494 lat (msec) : 2=0.07%, 4=0.07%, 10=0.04% 00:15:59.494 cpu : usr=1.40%, sys=7.00%, ctx=2740, majf=0, minf=11 00:15:59.494 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.494 issued rwts: total=1204,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.494 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.494 job3: (groupid=0, jobs=1): err= 0: pid=93155: Sat Jul 13 07:02:07 2024 00:15:59.494 read: IOPS=1886, BW=7544KiB/s (7726kB/s)(7552KiB/1001msec) 00:15:59.494 slat (nsec): min=9519, max=43717, avg=16987.99, stdev=3492.12 00:15:59.494 clat (usec): min=191, max=2623, avg=271.70, stdev=96.25 00:15:59.494 lat (usec): min=206, max=2641, avg=288.69, stdev=96.22 00:15:59.494 clat percentiles (usec): 00:15:59.494 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:15:59.494 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:15:59.494 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 408], 95.00th=[ 445], 00:15:59.494 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 955], 99.95th=[ 2638], 00:15:59.494 | 99.99th=[ 2638] 00:15:59.494 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:59.494 slat (usec): min=19, max=170, avg=25.01, stdev= 7.11 00:15:59.494 clat (usec): min=132, max=2446, avg=193.59, stdev=58.99 00:15:59.494 lat (usec): min=153, max=2470, avg=218.59, stdev=59.89 00:15:59.494 clat percentiles (usec): 00:15:59.494 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:15:59.494 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:15:59.494 | 70.00th=[ 200], 
80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 237], 00:15:59.494 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 709], 99.95th=[ 734], 00:15:59.494 | 99.99th=[ 2442] 00:15:59.494 bw ( KiB/s): min= 8544, max= 8544, per=28.43%, avg=8544.00, stdev= 0.00, samples=1 00:15:59.494 iops : min= 2136, max= 2136, avg=2136.00, stdev= 0.00, samples=1 00:15:59.494 lat (usec) : 250=80.08%, 500=18.85%, 750=0.91%, 1000=0.10% 00:15:59.495 lat (msec) : 4=0.05% 00:15:59.495 cpu : usr=1.60%, sys=6.00%, ctx=3936, majf=0, minf=13 00:15:59.495 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.495 issued rwts: total=1888,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.495 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.495 00:15:59.495 Run status group 0 (all jobs): 00:15:59.495 READ: bw=25.8MiB/s (27.1MB/s), 4811KiB/s-8184KiB/s (4927kB/s-8380kB/s), io=25.9MiB (27.1MB), run=1001-1001msec 00:15:59.495 WRITE: bw=29.3MiB/s (30.8MB/s), 6138KiB/s-9594KiB/s (6285kB/s-9825kB/s), io=29.4MiB (30.8MB), run=1001-1001msec 00:15:59.495 00:15:59.495 Disk stats (read/write): 00:15:59.495 nvme0n1: ios=1074/1380, merge=0/0, ticks=433/400, in_queue=833, util=87.47% 00:15:59.495 nvme0n2: ios=1800/2048, merge=0/0, ticks=453/386, in_queue=839, util=88.92% 00:15:59.495 nvme0n3: ios=1024/1319, merge=0/0, ticks=390/399, in_queue=789, util=88.27% 00:15:59.495 nvme0n4: ios=1556/2048, merge=0/0, ticks=386/421, in_queue=807, util=89.75% 00:15:59.495 07:02:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:59.495 [global] 00:15:59.495 thread=1 00:15:59.495 invalidate=1 00:15:59.495 rw=write 00:15:59.495 time_based=1 00:15:59.495 runtime=1 00:15:59.495 ioengine=libaio 00:15:59.495 direct=1 00:15:59.495 bs=4096 00:15:59.495 iodepth=128 00:15:59.495 norandommap=0 00:15:59.495 numjobs=1 00:15:59.495 00:15:59.495 verify_dump=1 00:15:59.495 verify_backlog=512 00:15:59.495 verify_state_save=0 00:15:59.495 do_verify=1 00:15:59.495 verify=crc32c-intel 00:15:59.495 [job0] 00:15:59.495 filename=/dev/nvme0n1 00:15:59.495 [job1] 00:15:59.495 filename=/dev/nvme0n2 00:15:59.495 [job2] 00:15:59.495 filename=/dev/nvme0n3 00:15:59.495 [job3] 00:15:59.495 filename=/dev/nvme0n4 00:15:59.495 Could not set queue depth (nvme0n1) 00:15:59.495 Could not set queue depth (nvme0n2) 00:15:59.495 Could not set queue depth (nvme0n3) 00:15:59.495 Could not set queue depth (nvme0n4) 00:15:59.495 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.495 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.495 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.495 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.495 fio-3.35 00:15:59.495 Starting 4 threads 00:16:00.873 00:16:00.873 job0: (groupid=0, jobs=1): err= 0: pid=93215: Sat Jul 13 07:02:08 2024 00:16:00.873 read: IOPS=3720, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1005msec) 00:16:00.873 slat (usec): min=4, max=7791, avg=130.23, stdev=623.17 00:16:00.873 clat (usec): min=1499, max=23982, avg=16267.50, stdev=2499.29 00:16:00.873 lat (usec): min=4867, max=23995, avg=16397.73, 
stdev=2544.06 00:16:00.873 clat percentiles (usec): 00:16:00.873 | 1.00th=[ 9110], 5.00th=[11863], 10.00th=[13173], 20.00th=[15139], 00:16:00.873 | 30.00th=[15664], 40.00th=[15795], 50.00th=[16057], 60.00th=[16581], 00:16:00.873 | 70.00th=[16909], 80.00th=[17695], 90.00th=[19268], 95.00th=[20579], 00:16:00.873 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23200], 99.95th=[23987], 00:16:00.873 | 99.99th=[23987] 00:16:00.873 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:16:00.873 slat (usec): min=11, max=6831, avg=117.34, stdev=492.19 00:16:00.873 clat (usec): min=9641, max=24178, avg=16153.28, stdev=1912.22 00:16:00.873 lat (usec): min=9668, max=24239, avg=16270.63, stdev=1963.91 00:16:00.873 clat percentiles (usec): 00:16:00.873 | 1.00th=[10683], 5.00th=[12780], 10.00th=[14222], 20.00th=[15139], 00:16:00.873 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16319], 60.00th=[16450], 00:16:00.873 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17695], 95.00th=[20055], 00:16:00.874 | 99.00th=[22676], 99.50th=[23200], 99.90th=[24249], 99.95th=[24249], 00:16:00.874 | 99.99th=[24249] 00:16:00.874 bw ( KiB/s): min=16384, max=16384, per=35.06%, avg=16384.00, stdev= 0.00, samples=2 00:16:00.874 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:16:00.874 lat (msec) : 2=0.01%, 10=0.71%, 20=93.25%, 50=6.02% 00:16:00.874 cpu : usr=3.88%, sys=12.85%, ctx=541, majf=0, minf=11 00:16:00.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:00.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.874 issued rwts: total=3739,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.874 job1: (groupid=0, jobs=1): err= 0: pid=93216: Sat Jul 13 07:02:08 2024 00:16:00.874 read: IOPS=1738, BW=6952KiB/s (7119kB/s)(7008KiB/1008msec) 00:16:00.874 slat (usec): min=4, max=10420, avg=269.75, stdev=956.96 00:16:00.874 clat (usec): min=1891, max=44530, avg=31905.91, stdev=5065.41 00:16:00.874 lat (usec): min=6669, max=44550, avg=32175.66, stdev=5012.13 00:16:00.874 clat percentiles (usec): 00:16:00.874 | 1.00th=[ 7046], 5.00th=[25035], 10.00th=[27395], 20.00th=[29492], 00:16:00.874 | 30.00th=[30540], 40.00th=[31327], 50.00th=[33162], 60.00th=[33817], 00:16:00.874 | 70.00th=[34341], 80.00th=[34866], 90.00th=[36439], 95.00th=[38536], 00:16:00.874 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:16:00.874 | 99.99th=[44303] 00:16:00.874 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:16:00.874 slat (usec): min=6, max=8190, avg=250.76, stdev=824.50 00:16:00.874 clat (usec): min=22890, max=44563, avg=34261.91, stdev=3255.49 00:16:00.874 lat (usec): min=22937, max=44586, avg=34512.67, stdev=3176.50 00:16:00.874 clat percentiles (usec): 00:16:00.874 | 1.00th=[25560], 5.00th=[29492], 10.00th=[30540], 20.00th=[31851], 00:16:00.874 | 30.00th=[32375], 40.00th=[33817], 50.00th=[34341], 60.00th=[34866], 00:16:00.874 | 70.00th=[35390], 80.00th=[36439], 90.00th=[38536], 95.00th=[40633], 00:16:00.874 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:16:00.874 | 99.99th=[44303] 00:16:00.874 bw ( KiB/s): min= 8192, max= 8192, per=17.53%, avg=8192.00, stdev= 0.00, samples=2 00:16:00.874 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:16:00.874 lat (msec) : 2=0.03%, 10=0.50%, 20=0.84%, 50=98.63% 
00:16:00.874 cpu : usr=1.89%, sys=6.65%, ctx=776, majf=0, minf=9 00:16:00.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:16:00.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.874 issued rwts: total=1752,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.874 job2: (groupid=0, jobs=1): err= 0: pid=93217: Sat Jul 13 07:02:08 2024 00:16:00.874 read: IOPS=1737, BW=6950KiB/s (7117kB/s)(6992KiB/1006msec) 00:16:00.874 slat (usec): min=4, max=8560, avg=269.01, stdev=930.29 00:16:00.874 clat (usec): min=1249, max=40734, avg=31831.87, stdev=4668.85 00:16:00.874 lat (usec): min=5662, max=41912, avg=32100.87, stdev=4608.37 00:16:00.874 clat percentiles (usec): 00:16:00.874 | 1.00th=[ 5997], 5.00th=[26346], 10.00th=[28443], 20.00th=[30278], 00:16:00.874 | 30.00th=[30802], 40.00th=[31327], 50.00th=[32900], 60.00th=[33817], 00:16:00.874 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[36963], 00:16:00.874 | 99.00th=[38536], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:16:00.874 | 99.99th=[40633] 00:16:00.874 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:16:00.874 slat (usec): min=12, max=7973, avg=252.04, stdev=782.32 00:16:00.874 clat (usec): min=23455, max=42785, avg=34306.81, stdev=2454.48 00:16:00.874 lat (usec): min=27047, max=42806, avg=34558.84, stdev=2359.12 00:16:00.874 clat percentiles (usec): 00:16:00.874 | 1.00th=[28705], 5.00th=[30540], 10.00th=[31065], 20.00th=[32113], 00:16:00.874 | 30.00th=[33424], 40.00th=[33817], 50.00th=[34341], 60.00th=[34866], 00:16:00.874 | 70.00th=[35390], 80.00th=[35914], 90.00th=[36963], 95.00th=[38011], 00:16:00.874 | 99.00th=[40633], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:00.874 | 99.99th=[42730] 00:16:00.874 bw ( KiB/s): min= 8192, max= 8192, per=17.53%, avg=8192.00, stdev= 0.00, samples=2 00:16:00.874 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:16:00.874 lat (msec) : 2=0.03%, 10=0.50%, 20=0.84%, 50=98.63% 00:16:00.874 cpu : usr=1.89%, sys=6.87%, ctx=773, majf=0, minf=14 00:16:00.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:16:00.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.874 issued rwts: total=1748,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.874 job3: (groupid=0, jobs=1): err= 0: pid=93218: Sat Jul 13 07:02:08 2024 00:16:00.874 read: IOPS=3447, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1004msec) 00:16:00.874 slat (usec): min=9, max=4780, avg=143.40, stdev=687.22 00:16:00.874 clat (usec): min=547, max=22111, avg=18147.21, stdev=1980.71 00:16:00.874 lat (usec): min=4367, max=23157, avg=18290.61, stdev=1878.54 00:16:00.874 clat percentiles (usec): 00:16:00.874 | 1.00th=[ 9765], 5.00th=[15008], 10.00th=[16712], 20.00th=[17695], 00:16:00.874 | 30.00th=[18220], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:16:00.874 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19530], 95.00th=[20055], 00:16:00.874 | 99.00th=[21103], 99.50th=[21365], 99.90th=[22152], 99.95th=[22152], 00:16:00.874 | 99.99th=[22152] 00:16:00.874 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:16:00.874 slat (usec): min=8, 
max=5327, avg=132.65, stdev=548.79 00:16:00.874 clat (usec): min=13188, max=22390, avg=17732.14, stdev=1956.67 00:16:00.874 lat (usec): min=13216, max=22427, avg=17864.79, stdev=1946.63 00:16:00.874 clat percentiles (usec): 00:16:00.874 | 1.00th=[13304], 5.00th=[14484], 10.00th=[15270], 20.00th=[15926], 00:16:00.874 | 30.00th=[16712], 40.00th=[16909], 50.00th=[17695], 60.00th=[18482], 00:16:00.874 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20317], 95.00th=[20579], 00:16:00.874 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22414], 99.95th=[22414], 00:16:00.874 | 99.99th=[22414] 00:16:00.874 bw ( KiB/s): min=13696, max=14976, per=30.68%, avg=14336.00, stdev=905.10, samples=2 00:16:00.874 iops : min= 3424, max= 3744, avg=3584.00, stdev=226.27, samples=2 00:16:00.874 lat (usec) : 750=0.01% 00:16:00.874 lat (msec) : 10=0.67%, 20=91.13%, 50=8.19% 00:16:00.874 cpu : usr=3.39%, sys=11.57%, ctx=398, majf=0, minf=13 00:16:00.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:00.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.874 issued rwts: total=3461,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.874 00:16:00.874 Run status group 0 (all jobs): 00:16:00.874 READ: bw=41.5MiB/s (43.5MB/s), 6950KiB/s-14.5MiB/s (7117kB/s-15.2MB/s), io=41.8MiB (43.8MB), run=1004-1008msec 00:16:00.874 WRITE: bw=45.6MiB/s (47.9MB/s), 8127KiB/s-15.9MiB/s (8322kB/s-16.7MB/s), io=46.0MiB (48.2MB), run=1004-1008msec 00:16:00.874 00:16:00.874 Disk stats (read/write): 00:16:00.874 nvme0n1: ios=3185/3584, merge=0/0, ticks=24663/25717, in_queue=50380, util=88.47% 00:16:00.874 nvme0n2: ios=1584/1748, merge=0/0, ticks=12499/13233, in_queue=25732, util=89.57% 00:16:00.874 nvme0n3: ios=1557/1735, merge=0/0, ticks=12297/13893, in_queue=26190, util=89.59% 00:16:00.874 nvme0n4: ios=2984/3072, merge=0/0, ticks=12872/12519, in_queue=25391, util=89.52% 00:16:00.874 07:02:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:00.874 [global] 00:16:00.874 thread=1 00:16:00.874 invalidate=1 00:16:00.874 rw=randwrite 00:16:00.874 time_based=1 00:16:00.874 runtime=1 00:16:00.874 ioengine=libaio 00:16:00.874 direct=1 00:16:00.874 bs=4096 00:16:00.874 iodepth=128 00:16:00.874 norandommap=0 00:16:00.874 numjobs=1 00:16:00.874 00:16:00.874 verify_dump=1 00:16:00.874 verify_backlog=512 00:16:00.874 verify_state_save=0 00:16:00.874 do_verify=1 00:16:00.874 verify=crc32c-intel 00:16:00.874 [job0] 00:16:00.874 filename=/dev/nvme0n1 00:16:00.874 [job1] 00:16:00.874 filename=/dev/nvme0n2 00:16:00.874 [job2] 00:16:00.874 filename=/dev/nvme0n3 00:16:00.874 [job3] 00:16:00.874 filename=/dev/nvme0n4 00:16:00.874 Could not set queue depth (nvme0n1) 00:16:00.874 Could not set queue depth (nvme0n2) 00:16:00.874 Could not set queue depth (nvme0n3) 00:16:00.874 Could not set queue depth (nvme0n4) 00:16:00.874 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.874 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.874 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.874 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.874 fio-3.35 00:16:00.874 Starting 4 threads 00:16:02.252 00:16:02.252 job0: (groupid=0, jobs=1): err= 0: pid=93272: Sat Jul 13 07:02:09 2024 00:16:02.252 read: IOPS=3530, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:16:02.252 slat (usec): min=6, max=15914, avg=147.30, stdev=903.56 00:16:02.252 clat (usec): min=3989, max=34159, avg=18743.78, stdev=3646.20 00:16:02.252 lat (usec): min=7447, max=34232, avg=18891.09, stdev=3700.57 00:16:02.252 clat percentiles (usec): 00:16:02.252 | 1.00th=[12387], 5.00th=[13829], 10.00th=[14615], 20.00th=[16909], 00:16:02.252 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18220], 60.00th=[18482], 00:16:02.252 | 70.00th=[19530], 80.00th=[20841], 90.00th=[22938], 95.00th=[25822], 00:16:02.252 | 99.00th=[31327], 99.50th=[33162], 99.90th=[34341], 99.95th=[34341], 00:16:02.252 | 99.99th=[34341] 00:16:02.252 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec); 0 zone resets 00:16:02.252 slat (usec): min=5, max=5185, avg=114.25, stdev=494.81 00:16:02.252 clat (usec): min=6210, max=64934, avg=16965.12, stdev=2988.83 00:16:02.252 lat (usec): min=6247, max=64950, avg=17079.37, stdev=3024.73 00:16:02.252 clat percentiles (usec): 00:16:02.252 | 1.00th=[ 7963], 5.00th=[10552], 10.00th=[13566], 20.00th=[16319], 00:16:02.252 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:16:02.252 | 70.00th=[17957], 80.00th=[18744], 90.00th=[19268], 95.00th=[19792], 00:16:02.252 | 99.00th=[21627], 99.50th=[22938], 99.90th=[36439], 99.95th=[53216], 00:16:02.252 | 99.99th=[64750] 00:16:02.252 bw ( KiB/s): min=13416, max=15256, per=29.99%, avg=14336.00, stdev=1301.08, samples=2 00:16:02.252 iops : min= 3354, max= 3814, avg=3584.00, stdev=325.27, samples=2 00:16:02.252 lat (msec) : 4=0.01%, 10=2.44%, 20=82.36%, 50=15.15%, 100=0.03% 00:16:02.252 cpu : usr=3.66%, sys=10.18%, ctx=388, majf=0, minf=3 00:16:02.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:02.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.252 issued rwts: total=3576,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.252 job1: (groupid=0, jobs=1): err= 0: pid=93273: Sat Jul 13 07:02:09 2024 00:16:02.252 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:16:02.252 slat (usec): min=3, max=12801, avg=236.36, stdev=1137.19 00:16:02.252 clat (usec): min=20354, max=41942, avg=29154.08, stdev=3366.65 00:16:02.252 lat (usec): min=20413, max=41958, avg=29390.44, stdev=3461.89 00:16:02.252 clat percentiles (usec): 00:16:02.252 | 1.00th=[21103], 5.00th=[23462], 10.00th=[25035], 20.00th=[27132], 00:16:02.252 | 30.00th=[27395], 40.00th=[28443], 50.00th=[28967], 60.00th=[29492], 00:16:02.252 | 70.00th=[30540], 80.00th=[31851], 90.00th=[33424], 95.00th=[35390], 00:16:02.252 | 99.00th=[39060], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:16:02.252 | 99.99th=[41681] 00:16:02.252 write: IOPS=2147, BW=8592KiB/s (8798kB/s)(8652KiB/1007msec); 0 zone resets 00:16:02.252 slat (usec): min=5, max=13785, avg=232.05, stdev=897.37 00:16:02.252 clat (usec): min=2690, max=44920, avg=30774.56, stdev=4463.88 00:16:02.252 lat (usec): min=11433, max=44947, avg=31006.61, stdev=4515.97 00:16:02.252 clat percentiles (usec): 00:16:02.252 | 1.00th=[11863], 5.00th=[24249], 10.00th=[26346], 20.00th=[28705], 00:16:02.253 | 30.00th=[30016], 
40.00th=[30802], 50.00th=[31589], 60.00th=[32113], 00:16:02.253 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[36963], 00:16:02.253 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:16:02.253 | 99.99th=[44827] 00:16:02.253 bw ( KiB/s): min= 8192, max= 8192, per=17.14%, avg=8192.00, stdev= 0.00, samples=2 00:16:02.253 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:16:02.253 lat (msec) : 4=0.02%, 20=1.33%, 50=98.65% 00:16:02.253 cpu : usr=2.39%, sys=6.16%, ctx=736, majf=0, minf=15 00:16:02.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:02.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.253 issued rwts: total=2048,2163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.253 job2: (groupid=0, jobs=1): err= 0: pid=93274: Sat Jul 13 07:02:09 2024 00:16:02.253 read: IOPS=3978, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1003msec) 00:16:02.253 slat (usec): min=7, max=8330, avg=123.69, stdev=685.59 00:16:02.253 clat (usec): min=1685, max=23956, avg=15748.64, stdev=2178.62 00:16:02.253 lat (usec): min=4478, max=23974, avg=15872.33, stdev=2242.40 00:16:02.253 clat percentiles (usec): 00:16:02.253 | 1.00th=[ 8979], 5.00th=[12387], 10.00th=[14091], 20.00th=[14746], 00:16:02.253 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:16:02.253 | 70.00th=[16057], 80.00th=[16450], 90.00th=[18220], 95.00th=[19530], 00:16:02.253 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23462], 99.95th=[23987], 00:16:02.253 | 99.99th=[23987] 00:16:02.253 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:16:02.253 slat (usec): min=11, max=7020, avg=115.34, stdev=576.58 00:16:02.253 clat (usec): min=6733, max=24455, avg=15585.88, stdev=2102.59 00:16:02.253 lat (usec): min=6763, max=24514, avg=15701.22, stdev=2164.24 00:16:02.253 clat percentiles (usec): 00:16:02.253 | 1.00th=[10159], 5.00th=[12125], 10.00th=[13042], 20.00th=[14091], 00:16:02.253 | 30.00th=[14746], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057], 00:16:02.253 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17695], 95.00th=[18744], 00:16:02.253 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23987], 99.95th=[23987], 00:16:02.253 | 99.99th=[24511] 00:16:02.253 bw ( KiB/s): min=16384, max=16384, per=34.27%, avg=16384.00, stdev= 0.00, samples=2 00:16:02.253 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:16:02.253 lat (msec) : 2=0.01%, 10=1.15%, 20=94.94%, 50=3.90% 00:16:02.253 cpu : usr=3.29%, sys=13.27%, ctx=398, majf=0, minf=10 00:16:02.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:02.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.253 issued rwts: total=3990,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.253 job3: (groupid=0, jobs=1): err= 0: pid=93275: Sat Jul 13 07:02:09 2024 00:16:02.253 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:16:02.253 slat (usec): min=4, max=13076, avg=239.69, stdev=1120.00 00:16:02.253 clat (usec): min=19166, max=45155, avg=29291.58, stdev=4019.22 00:16:02.253 lat (usec): min=19186, max=45192, avg=29531.28, stdev=4116.96 00:16:02.253 clat percentiles 
(usec): 00:16:02.253 | 1.00th=[20579], 5.00th=[22414], 10.00th=[24511], 20.00th=[26346], 00:16:02.253 | 30.00th=[27657], 40.00th=[28443], 50.00th=[28967], 60.00th=[29492], 00:16:02.253 | 70.00th=[30278], 80.00th=[31851], 90.00th=[35390], 95.00th=[36963], 00:16:02.253 | 99.00th=[40633], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:16:02.253 | 99.99th=[45351] 00:16:02.253 write: IOPS=2238, BW=8954KiB/s (9168kB/s)(9052KiB/1011msec); 0 zone resets 00:16:02.253 slat (usec): min=5, max=14214, avg=220.21, stdev=893.78 00:16:02.253 clat (usec): min=2816, max=44918, avg=29887.21, stdev=4936.83 00:16:02.253 lat (usec): min=10957, max=44951, avg=30107.43, stdev=5001.64 00:16:02.253 clat percentiles (usec): 00:16:02.253 | 1.00th=[11207], 5.00th=[19530], 10.00th=[23462], 20.00th=[27132], 00:16:02.253 | 30.00th=[29492], 40.00th=[30278], 50.00th=[31065], 60.00th=[31851], 00:16:02.253 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[34866], 00:16:02.253 | 99.00th=[38536], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:16:02.253 | 99.99th=[44827] 00:16:02.253 bw ( KiB/s): min= 8480, max= 8600, per=17.87%, avg=8540.00, stdev=84.85, samples=2 00:16:02.253 iops : min= 2120, max= 2150, avg=2135.00, stdev=21.21, samples=2 00:16:02.253 lat (msec) : 4=0.02%, 20=3.15%, 50=96.82% 00:16:02.253 cpu : usr=2.57%, sys=6.04%, ctx=814, majf=0, minf=7 00:16:02.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:02.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.253 issued rwts: total=2048,2263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.253 00:16:02.253 Run status group 0 (all jobs): 00:16:02.253 READ: bw=45.0MiB/s (47.2MB/s), 8103KiB/s-15.5MiB/s (8297kB/s-16.3MB/s), io=45.6MiB (47.8MB), run=1003-1013msec 00:16:02.253 WRITE: bw=46.7MiB/s (48.9MB/s), 8592KiB/s-16.0MiB/s (8798kB/s-16.7MB/s), io=47.3MiB (49.6MB), run=1003-1013msec 00:16:02.253 00:16:02.253 Disk stats (read/write): 00:16:02.253 nvme0n1: ios=3045/3072, merge=0/0, ticks=34930/31254, in_queue=66184, util=88.28% 00:16:02.253 nvme0n2: ios=1585/2047, merge=0/0, ticks=21542/30093, in_queue=51635, util=89.38% 00:16:02.253 nvme0n3: ios=3415/3584, merge=0/0, ticks=25548/24741, in_queue=50289, util=90.65% 00:16:02.253 nvme0n4: ios=1599/2048, merge=0/0, ticks=23022/29208, in_queue=52230, util=89.56% 00:16:02.253 07:02:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:02.253 07:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=93291 00:16:02.253 07:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:02.253 07:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:02.253 [global] 00:16:02.253 thread=1 00:16:02.253 invalidate=1 00:16:02.253 rw=read 00:16:02.253 time_based=1 00:16:02.253 runtime=10 00:16:02.253 ioengine=libaio 00:16:02.253 direct=1 00:16:02.253 bs=4096 00:16:02.253 iodepth=1 00:16:02.253 norandommap=1 00:16:02.253 numjobs=1 00:16:02.253 00:16:02.253 [job0] 00:16:02.253 filename=/dev/nvme0n1 00:16:02.253 [job1] 00:16:02.253 filename=/dev/nvme0n2 00:16:02.253 [job2] 00:16:02.253 filename=/dev/nvme0n3 00:16:02.253 [job3] 00:16:02.253 filename=/dev/nvme0n4 00:16:02.253 Could not set queue depth (nvme0n1) 00:16:02.253 Could not set queue depth (nvme0n2) 00:16:02.253 Could not 
set queue depth (nvme0n3) 00:16:02.253 Could not set queue depth (nvme0n4) 00:16:02.253 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.253 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.253 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.253 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.253 fio-3.35 00:16:02.253 Starting 4 threads 00:16:05.538 07:02:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:05.538 fio: pid=93334, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:05.538 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=31391744, buflen=4096 00:16:05.538 07:02:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:05.538 fio: pid=93333, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:05.538 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=55808000, buflen=4096 00:16:05.538 07:02:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.538 07:02:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:05.797 fio: pid=93331, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:05.797 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=49094656, buflen=4096 00:16:05.797 07:02:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:05.797 07:02:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:06.056 fio: pid=93332, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:06.056 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=46673920, buflen=4096 00:16:06.056 00:16:06.056 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93331: Sat Jul 13 07:02:14 2024 00:16:06.056 read: IOPS=3533, BW=13.8MiB/s (14.5MB/s)(46.8MiB/3392msec) 00:16:06.056 slat (usec): min=10, max=12722, avg=19.72, stdev=211.55 00:16:06.056 clat (usec): min=58, max=7333, avg=261.48, stdev=117.77 00:16:06.056 lat (usec): min=149, max=13011, avg=281.20, stdev=241.85 00:16:06.056 clat percentiles (usec): 00:16:06.056 | 1.00th=[ 159], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 219], 00:16:06.056 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:16:06.056 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 469], 00:16:06.056 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 611], 99.95th=[ 2835], 00:16:06.056 | 99.99th=[ 3982] 00:16:06.056 bw ( KiB/s): min= 8485, max=16928, per=29.15%, avg=14262.17, stdev=3011.62, samples=6 00:16:06.056 iops : min= 2121, max= 4232, avg=3565.50, stdev=753.00, samples=6 00:16:06.056 lat (usec) : 100=0.01%, 250=60.74%, 500=37.57%, 750=1.59%, 1000=0.01% 00:16:06.056 lat (msec) : 2=0.03%, 4=0.04%, 10=0.01% 00:16:06.056 cpu : usr=1.24%, sys=4.66%, ctx=12004, majf=0, minf=1 00:16:06.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.056 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.056 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.056 issued rwts: total=11987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.056 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93332: Sat Jul 13 07:02:14 2024 00:16:06.056 read: IOPS=3120, BW=12.2MiB/s (12.8MB/s)(44.5MiB/3652msec) 00:16:06.056 slat (usec): min=10, max=12291, avg=26.86, stdev=214.48 00:16:06.056 clat (usec): min=107, max=2902, avg=291.49, stdev=96.26 00:16:06.056 lat (usec): min=140, max=12543, avg=318.35, stdev=234.82 00:16:06.056 clat percentiles (usec): 00:16:06.056 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 157], 20.00th=[ 223], 00:16:06.056 | 30.00th=[ 243], 40.00th=[ 281], 50.00th=[ 310], 60.00th=[ 326], 00:16:06.056 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 404], 00:16:06.056 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 881], 99.95th=[ 2245], 00:16:06.056 | 99.99th=[ 2802] 00:16:06.056 bw ( KiB/s): min=10200, max=16557, per=25.10%, avg=12281.86, stdev=2476.28, samples=7 00:16:06.056 iops : min= 2550, max= 4139, avg=3070.29, stdev=619.05, samples=7 00:16:06.056 lat (usec) : 250=32.41%, 500=67.37%, 750=0.09%, 1000=0.05% 00:16:06.056 lat (msec) : 2=0.02%, 4=0.05% 00:16:06.056 cpu : usr=1.07%, sys=5.70%, ctx=11425, majf=0, minf=1 00:16:06.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.056 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.056 issued rwts: total=11396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.056 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=93333: Sat Jul 13 07:02:14 2024 00:16:06.056 read: IOPS=4303, BW=16.8MiB/s (17.6MB/s)(53.2MiB/3166msec) 00:16:06.056 slat (usec): min=13, max=7709, avg=18.28, stdev=91.52 00:16:06.056 clat (usec): min=152, max=2817, avg=212.65, stdev=47.59 00:16:06.056 lat (usec): min=167, max=8052, avg=230.94, stdev=104.52 00:16:06.056 clat percentiles (usec): 00:16:06.056 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 184], 00:16:06.056 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 217], 00:16:06.056 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 273], 00:16:06.056 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 429], 99.95th=[ 717], 00:16:06.056 | 99.99th=[ 2147] 00:16:06.056 bw ( KiB/s): min=16455, max=18322, per=35.61%, avg=17425.50, stdev=654.88, samples=6 00:16:06.056 iops : min= 4113, max= 4580, avg=4356.17, stdev=163.81, samples=6 00:16:06.056 lat (usec) : 250=88.51%, 500=11.41%, 750=0.03%, 1000=0.01% 00:16:06.056 lat (msec) : 2=0.01%, 4=0.01% 00:16:06.056 cpu : usr=1.14%, sys=5.91%, ctx=13628, majf=0, minf=1 00:16:06.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.056 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.056 issued rwts: total=13626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.056 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): 
pid=93334: Sat Jul 13 07:02:14 2024 00:16:06.056 read: IOPS=2609, BW=10.2MiB/s (10.7MB/s)(29.9MiB/2937msec) 00:16:06.056 slat (nsec): min=10562, max=76305, avg=18283.77, stdev=4357.06 00:16:06.056 clat (usec): min=162, max=4309, avg=362.71, stdev=80.41 00:16:06.056 lat (usec): min=178, max=4343, avg=381.00, stdev=80.28 00:16:06.056 clat percentiles (usec): 00:16:06.056 | 1.00th=[ 260], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 318], 00:16:06.056 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 363], 00:16:06.056 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 461], 95.00th=[ 486], 00:16:06.056 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 676], 99.95th=[ 963], 00:16:06.056 | 99.99th=[ 4293] 00:16:06.056 bw ( KiB/s): min=10192, max=11296, per=22.17%, avg=10848.00, stdev=563.76, samples=5 00:16:06.056 iops : min= 2548, max= 2824, avg=2712.00, stdev=140.94, samples=5 00:16:06.056 lat (usec) : 250=0.44%, 500=96.84%, 750=2.61%, 1000=0.05% 00:16:06.056 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:16:06.056 cpu : usr=0.72%, sys=3.95%, ctx=7665, majf=0, minf=1 00:16:06.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.056 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.056 issued rwts: total=7665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.056 00:16:06.056 Run status group 0 (all jobs): 00:16:06.056 READ: bw=47.8MiB/s (50.1MB/s), 10.2MiB/s-16.8MiB/s (10.7MB/s-17.6MB/s), io=174MiB (183MB), run=2937-3652msec 00:16:06.056 00:16:06.056 Disk stats (read/write): 00:16:06.056 nvme0n1: ios=11903/0, merge=0/0, ticks=3145/0, in_queue=3145, util=94.94% 00:16:06.056 nvme0n2: ios=11216/0, merge=0/0, ticks=3350/0, in_queue=3350, util=95.34% 00:16:06.056 nvme0n3: ios=13455/0, merge=0/0, ticks=2948/0, in_queue=2948, util=96.40% 00:16:06.056 nvme0n4: ios=7531/0, merge=0/0, ticks=2741/0, in_queue=2741, util=96.73% 00:16:06.056 07:02:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:06.056 07:02:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:06.315 07:02:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:06.315 07:02:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:06.573 07:02:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:06.573 07:02:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:06.832 07:02:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:06.832 07:02:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:07.091 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:07.091 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:07.349 07:02:15 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@69 -- # fio_status=0 00:16:07.349 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 93291 00:16:07.349 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:07.349 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:07.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.349 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:07.349 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:07.349 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:07.349 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:07.608 nvmf hotplug test: fio failed as expected 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.608 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.867 rmmod nvme_tcp 00:16:07.867 rmmod nvme_fabrics 00:16:07.867 rmmod nvme_keyring 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 92794 ']' 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 92794 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 92794 ']' 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 92794 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
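The hotplug sequence traced above reduces to a simple pattern: start a time-based fio read job against the connected NVMe-oF block devices, delete the backing malloc/raid bdevs on the target while I/O is in flight, and treat the resulting Remote I/O errors (err=121) plus a non-zero fio exit status as the expected outcome. A minimal standalone sketch of the same idea, assuming a single connected namespace and reusing the device path and bdev name that appear in the log (an illustration, not the fio.sh / fio-wrapper implementation):

#!/usr/bin/env bash
# Same I/O pattern as the wrapper invocation above: libaio, direct=1, bs=4096, iodepth=1.
fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 \
    --ioengine=libaio --direct=1 --iodepth=1 --time_based --runtime=10 &
fio_pid=$!

sleep 3                                      # let I/O get going first
scripts/rpc.py bdev_malloc_delete Malloc0    # remove the backing bdev on the target side

# fio should now hit EREMOTEIO (121) on that file and exit non-zero; if it survives
# the removal, the namespace hot-remove never reached the initiator.
if wait "$fio_pid"; then
    echo "unexpected: fio survived bdev removal" >&2
    exit 1
fi
echo "nvmf hotplug test: fio failed as expected"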
00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92794 00:16:07.867 killing process with pid 92794 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92794' 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 92794 00:16:07.867 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 92794 00:16:08.126 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:08.126 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:08.126 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:08.126 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.126 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:08.127 07:02:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.127 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.127 07:02:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.127 07:02:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:08.127 00:16:08.127 real 0m19.645s 00:16:08.127 user 1m15.434s 00:16:08.127 sys 0m8.361s 00:16:08.127 07:02:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.127 07:02:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.127 ************************************ 00:16:08.127 END TEST nvmf_fio_target 00:16:08.127 ************************************ 00:16:08.127 07:02:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:08.127 07:02:16 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:08.127 07:02:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:08.127 07:02:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.127 07:02:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.127 ************************************ 00:16:08.127 START TEST nvmf_bdevio 00:16:08.127 ************************************ 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:08.127 * Looking for test storage... 
00:16:08.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.127 07:02:16 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.127 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:08.386 Cannot find device "nvmf_tgt_br" 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.386 Cannot find device "nvmf_tgt_br2" 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:08.386 Cannot find device "nvmf_tgt_br" 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:08.386 Cannot find device "nvmf_tgt_br2" 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:08.386 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.645 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.645 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.645 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.645 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.645 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:08.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:16:08.645 00:16:08.645 --- 10.0.0.2 ping statistics --- 00:16:08.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.645 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:08.645 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:08.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:08.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:08.645 00:16:08.645 --- 10.0.0.3 ping statistics --- 00:16:08.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.645 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:08.645 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:08.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:08.645 00:16:08.645 --- 10.0.0.1 ping statistics --- 00:16:08.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.645 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:08.645 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.645 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=93663 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 93663 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 93663 ']' 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:08.646 07:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:08.646 [2024-07-13 07:02:16.611511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:08.646 [2024-07-13 07:02:16.611616] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.905 [2024-07-13 07:02:16.751240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.905 [2024-07-13 07:02:16.829274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.905 [2024-07-13 07:02:16.829328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
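Behind the xtrace noise, nvmf_veth_init builds a small bridged topology: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace, each side owns one end of a veth pair, the host-side peers are enslaved to a bridge, an iptables rule admits NVMe/TCP traffic on port 4420, and connectivity is sanity-checked with ping before the target app starts. A hand-rolled equivalent using the names and addresses from the log (only one target interface shown; nvmf_tgt_if2 / 10.0.0.3 follows the same pattern) — a sketch, not the common.sh implementation:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target address must answer before the tests run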
00:16:08.905 [2024-07-13 07:02:16.829355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.905 [2024-07-13 07:02:16.829362] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.905 [2024-07-13 07:02:16.829369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.905 [2024-07-13 07:02:16.829528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:08.905 [2024-07-13 07:02:16.829664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:08.905 [2024-07-13 07:02:16.830415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:08.905 [2024-07-13 07:02:16.830421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.472 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.472 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:09.472 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.472 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.472 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:09.731 [2024-07-13 07:02:17.571869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:09.731 Malloc0 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
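Stripped of the rpc_cmd wrapper, the target-side provisioning that bdevio.sh performs above is five RPCs against the freshly started nvmf_tgt: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, create the subsystem, attach the namespace, and open a listener. The same steps issued by hand (arguments exactly as traced above; default rpc.py socket assumed):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # transport opts as set in NVMF_TRANSPORT_OPTS
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from bdevio.sh
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420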
00:16:09.731 [2024-07-13 07:02:17.639649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.731 { 00:16:09.731 "params": { 00:16:09.731 "name": "Nvme$subsystem", 00:16:09.731 "trtype": "$TEST_TRANSPORT", 00:16:09.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.731 "adrfam": "ipv4", 00:16:09.731 "trsvcid": "$NVMF_PORT", 00:16:09.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.731 "hdgst": ${hdgst:-false}, 00:16:09.731 "ddgst": ${ddgst:-false} 00:16:09.731 }, 00:16:09.731 "method": "bdev_nvme_attach_controller" 00:16:09.731 } 00:16:09.731 EOF 00:16:09.731 )") 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:09.731 07:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.731 "params": { 00:16:09.731 "name": "Nvme1", 00:16:09.731 "trtype": "tcp", 00:16:09.731 "traddr": "10.0.0.2", 00:16:09.731 "adrfam": "ipv4", 00:16:09.731 "trsvcid": "4420", 00:16:09.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.731 "hdgst": false, 00:16:09.731 "ddgst": false 00:16:09.731 }, 00:16:09.731 "method": "bdev_nvme_attach_controller" 00:16:09.731 }' 00:16:09.731 [2024-07-13 07:02:17.700915] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
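The JSON emitted by gen_nvmf_target_json above is a bdev-layer subsystem config handed to bdevio through --json /dev/fd/62, so the bdevio process creates its own NVMe/TCP initiator bdev (which shows up below as Nvme1n1) rather than going through the kernel initiator. Against an already-running SPDK application the equivalent attach would be the single RPC below; this is a sketch of the equivalence, not what bdevio itself executes (it parses the JSON at startup):

scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1    # hdgst/ddgst stay at their default (false)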
00:16:09.731 [2024-07-13 07:02:17.701041] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93717 ] 00:16:09.990 [2024-07-13 07:02:17.845605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.990 [2024-07-13 07:02:17.926082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.990 [2024-07-13 07:02:17.926277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.990 [2024-07-13 07:02:17.926926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.249 I/O targets: 00:16:10.249 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:10.249 00:16:10.249 00:16:10.249 CUnit - A unit testing framework for C - Version 2.1-3 00:16:10.249 http://cunit.sourceforge.net/ 00:16:10.249 00:16:10.249 00:16:10.249 Suite: bdevio tests on: Nvme1n1 00:16:10.249 Test: blockdev write read block ...passed 00:16:10.249 Test: blockdev write zeroes read block ...passed 00:16:10.249 Test: blockdev write zeroes read no split ...passed 00:16:10.249 Test: blockdev write zeroes read split ...passed 00:16:10.249 Test: blockdev write zeroes read split partial ...passed 00:16:10.249 Test: blockdev reset ...[2024-07-13 07:02:18.224551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:10.249 [2024-07-13 07:02:18.224784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dde90 (9): Bad file descriptor 00:16:10.249 [2024-07-13 07:02:18.236073] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:10.249 passed 00:16:10.249 Test: blockdev write read 8 blocks ...passed 00:16:10.249 Test: blockdev write read size > 128k ...passed 00:16:10.249 Test: blockdev write read invalid size ...passed 00:16:10.249 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:10.249 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:10.249 Test: blockdev write read max offset ...passed 00:16:10.508 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:10.508 Test: blockdev writev readv 8 blocks ...passed 00:16:10.508 Test: blockdev writev readv 30 x 1block ...passed 00:16:10.508 Test: blockdev writev readv block ...passed 00:16:10.508 Test: blockdev writev readv size > 128k ...passed 00:16:10.508 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:10.508 Test: blockdev comparev and writev ...[2024-07-13 07:02:18.407227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.508 [2024-07-13 07:02:18.407336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.407357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.508 [2024-07-13 07:02:18.407369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.407871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.508 [2024-07-13 07:02:18.407898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.407914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.508 [2024-07-13 07:02:18.407925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.408476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.508 [2024-07-13 07:02:18.408513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.408541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.508 [2024-07-13 07:02:18.408562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.408964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.508 [2024-07-13 07:02:18.408990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.409007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:10.508 [2024-07-13 07:02:18.409019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:10.508 passed 00:16:10.508 Test: blockdev nvme passthru rw ...passed 00:16:10.508 Test: blockdev nvme passthru vendor specific ...[2024-07-13 07:02:18.491023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:10.508 [2024-07-13 07:02:18.491088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.491222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:10.508 [2024-07-13 07:02:18.491240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.491360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:10.508 [2024-07-13 07:02:18.491382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:10.508 [2024-07-13 07:02:18.491498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:10.508 [2024-07-13 07:02:18.491515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:10.508 passed 00:16:10.508 Test: blockdev nvme admin passthru ...passed 00:16:10.508 Test: blockdev copy ...passed 00:16:10.508 00:16:10.508 Run Summary: Type Total Ran Passed Failed Inactive 00:16:10.508 suites 1 1 n/a 0 0 00:16:10.508 tests 23 23 23 0 0 00:16:10.508 asserts 152 152 152 0 n/a 00:16:10.508 00:16:10.508 Elapsed time = 0.876 seconds 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:10.767 rmmod nvme_tcp 00:16:10.767 rmmod nvme_fabrics 00:16:10.767 rmmod nvme_keyring 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 93663 ']' 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 93663 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
93663 ']' 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 93663 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:10.767 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93663 00:16:11.026 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:11.026 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:11.026 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93663' 00:16:11.026 killing process with pid 93663 00:16:11.026 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 93663 00:16:11.026 07:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 93663 00:16:11.026 07:02:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.026 07:02:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.026 07:02:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.026 07:02:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.026 07:02:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.026 07:02:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.026 07:02:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.026 07:02:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.288 07:02:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:11.288 00:16:11.288 real 0m3.024s 00:16:11.288 user 0m10.906s 00:16:11.288 sys 0m0.774s 00:16:11.288 07:02:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.288 07:02:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:11.288 ************************************ 00:16:11.288 END TEST nvmf_bdevio 00:16:11.288 ************************************ 00:16:11.288 07:02:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:11.288 07:02:19 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:11.288 07:02:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:11.288 07:02:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.288 07:02:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.288 ************************************ 00:16:11.288 START TEST nvmf_auth_target 00:16:11.288 ************************************ 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:11.288 * Looking for test storage... 
00:16:11.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:11.288 Cannot find device "nvmf_tgt_br" 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.288 Cannot find device "nvmf_tgt_br2" 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:11.288 Cannot find device "nvmf_tgt_br" 00:16:11.288 
07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:11.288 Cannot find device "nvmf_tgt_br2" 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:16:11.288 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.546 07:02:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:11.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:11.546 00:16:11.546 --- 10.0.0.2 ping statistics --- 00:16:11.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.546 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:11.546 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:11.546 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:11.546 00:16:11.546 --- 10.0.0.3 ping statistics --- 00:16:11.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.546 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:11.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:16:11.546 00:16:11.546 --- 10.0.0.1 ping statistics --- 00:16:11.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.546 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=93893 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 93893 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 93893 ']' 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.546 07:02:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=93937 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4a7900578f47f97c0f77bc77ae6d253ca251bd965cbb5f9a 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FpN 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4a7900578f47f97c0f77bc77ae6d253ca251bd965cbb5f9a 0 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4a7900578f47f97c0f77bc77ae6d253ca251bd965cbb5f9a 0 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4a7900578f47f97c0f77bc77ae6d253ca251bd965cbb5f9a 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FpN 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FpN 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.FpN 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6bd01e3bc6b7f1feb36e9150aac9d996e3e44bd0c194594da78616df9895610c 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.F0b 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6bd01e3bc6b7f1feb36e9150aac9d996e3e44bd0c194594da78616df9895610c 3 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6bd01e3bc6b7f1feb36e9150aac9d996e3e44bd0c194594da78616df9895610c 3 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6bd01e3bc6b7f1feb36e9150aac9d996e3e44bd0c194594da78616df9895610c 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.F0b 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.F0b 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.F0b 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=65c655349805ae48293cd2418d672d8c 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.m8z 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 65c655349805ae48293cd2418d672d8c 1 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 65c655349805ae48293cd2418d672d8c 1 
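Note on the key material above: each gen_dhchap_key call pulls len/2 random bytes from /dev/urandom as a hex string (xxd -p -c0), writes it to a mode-0600 mktemp file, and wraps it into a DHHC-1 secret via a python one-liner whose body the trace does not expand. A minimal sketch of the observable steps, plus a decode of the key0 secret that appears verbatim in the nvme connect call further down (assuming the base64 payload is the secret characters followed by a 4-byte checksum):

  # generation pattern for the 48-character, null-digest key0 (the DHHC-1
  # wrapping itself is done by the unexpanded python step in nvmf/common.sh)
  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters of key material
  file=$(mktemp -t spdk.key-null.XXX)
  chmod 0600 "$file"                     # key files are kept owner-only

  # finished secrets look like DHHC-1:<hash-id>:<base64>: with hash-id
  # 00=none, 01=sha256, 02=sha384, 03=sha512; decoding the key0 secret
  # recovers the 48 hex characters generated at target/auth.sh@67
  secret='DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==:'
  printf '%s' "$secret" | cut -d: -f3 | base64 -d | head -c 48; echo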
00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=65c655349805ae48293cd2418d672d8c 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.m8z 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.m8z 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.m8z 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3ba4f3981c57e5b3c627b0ff697902a7b9a887042ef176a7 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vRO 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3ba4f3981c57e5b3c627b0ff697902a7b9a887042ef176a7 2 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3ba4f3981c57e5b3c627b0ff697902a7b9a887042ef176a7 2 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3ba4f3981c57e5b3c627b0ff697902a7b9a887042ef176a7 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vRO 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vRO 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.vRO 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:12.921 
07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c5cdc2ee7a09aa1083dc33ed31db55040a6725bf586eab88 00:16:12.921 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.D36 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c5cdc2ee7a09aa1083dc33ed31db55040a6725bf586eab88 2 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c5cdc2ee7a09aa1083dc33ed31db55040a6725bf586eab88 2 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c5cdc2ee7a09aa1083dc33ed31db55040a6725bf586eab88 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.D36 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.D36 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.D36 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e97988dbc15f18baec6c9875758bd9f2 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NQk 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e97988dbc15f18baec6c9875758bd9f2 1 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e97988dbc15f18baec6c9875758bd9f2 1 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e97988dbc15f18baec6c9875758bd9f2 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NQk 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NQk 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.NQk 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=956b89ecf15c7a4b5812b4ea8c0fc99c10483de6d0fb0a95db68514794fdaac9 00:16:12.922 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:13.198 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4KS 00:16:13.198 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 956b89ecf15c7a4b5812b4ea8c0fc99c10483de6d0fb0a95db68514794fdaac9 3 00:16:13.198 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 956b89ecf15c7a4b5812b4ea8c0fc99c10483de6d0fb0a95db68514794fdaac9 3 00:16:13.198 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:13.198 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:13.198 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=956b89ecf15c7a4b5812b4ea8c0fc99c10483de6d0fb0a95db68514794fdaac9 00:16:13.198 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:13.198 07:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4KS 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4KS 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.4KS 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 93893 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 93893 ']' 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
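For orientation at this point in the trace: two SPDK processes are now running, the nvmf target inside the nvmf_tgt_ns_spdk namespace (pid 93893, default RPC socket /var/tmp/spdk.sock, driven through rpc_cmd) and a second spdk_tgt acting as the NVMe-oF host (pid 93937, RPC socket /var/tmp/host.sock, driven through the hostrpc wrapper, which the trace shows expanding to rpc.py -s /var/tmp/host.sock at target/auth.sh@31). Each generated key file is then registered with the keyring on both sides, as shown next. Condensed to plain rpc.py calls (treating rpc_cmd as scripts/rpc.py on the default socket is an assumption about that helper), one key pair looks like:

  # target side, default RPC socket /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.FpN
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F0b

  # host side, same RPCs against the host.sock instance
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.FpN
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F0b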
00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.198 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.466 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.466 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:13.466 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 93937 /var/tmp/host.sock 00:16:13.466 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 93937 ']' 00:16:13.466 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:13.466 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:13.466 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:13.466 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.466 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FpN 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.FpN 00:16:13.725 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FpN 00:16:13.984 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.F0b ]] 00:16:13.984 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F0b 00:16:13.984 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.984 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.984 07:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.984 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F0b 00:16:13.984 07:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.F0b 00:16:14.243 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:14.243 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.m8z 00:16:14.243 07:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.243 07:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.243 07:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.243 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.m8z 00:16:14.243 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.m8z 00:16:14.500 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.vRO ]] 00:16:14.500 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vRO 00:16:14.500 07:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.500 07:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.500 07:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.500 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vRO 00:16:14.500 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vRO 00:16:14.757 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:14.757 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.D36 00:16:14.757 07:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.757 07:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.757 07:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.757 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.D36 00:16:14.757 07:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.D36 00:16:15.014 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.NQk ]] 00:16:15.014 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQk 00:16:15.014 07:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.014 07:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.014 07:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.014 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQk 00:16:15.014 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQk 00:16:15.272 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:15.272 
07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4KS 00:16:15.272 07:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.272 07:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.272 07:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.272 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.4KS 00:16:15.272 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.4KS 00:16:15.529 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:15.529 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:15.529 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.529 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.529 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.529 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.869 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.870 07:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.132 00:16:16.132 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.132 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:16.132 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.390 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.390 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.390 07:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.390 07:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.390 07:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.390 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.390 { 00:16:16.390 "auth": { 00:16:16.390 "dhgroup": "null", 00:16:16.390 "digest": "sha256", 00:16:16.390 "state": "completed" 00:16:16.390 }, 00:16:16.390 "cntlid": 1, 00:16:16.390 "listen_address": { 00:16:16.390 "adrfam": "IPv4", 00:16:16.390 "traddr": "10.0.0.2", 00:16:16.390 "trsvcid": "4420", 00:16:16.390 "trtype": "TCP" 00:16:16.390 }, 00:16:16.390 "peer_address": { 00:16:16.390 "adrfam": "IPv4", 00:16:16.390 "traddr": "10.0.0.1", 00:16:16.390 "trsvcid": "40218", 00:16:16.390 "trtype": "TCP" 00:16:16.390 }, 00:16:16.390 "qid": 0, 00:16:16.390 "state": "enabled", 00:16:16.390 "thread": "nvmf_tgt_poll_group_000" 00:16:16.390 } 00:16:16.390 ]' 00:16:16.390 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.390 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.390 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.647 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:16.648 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.648 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.648 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.648 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.905 07:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:16:21.086 07:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.086 07:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:21.086 07:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.086 07:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.086 07:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.086 07:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:21.086 07:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.086 07:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.086 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:21.086 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.086 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:21.086 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:21.086 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:21.086 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.086 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.086 07:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.086 07:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.344 07:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.344 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.344 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.603 00:16:21.603 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.603 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.603 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.861 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.862 { 00:16:21.862 "auth": { 00:16:21.862 "dhgroup": "null", 00:16:21.862 "digest": "sha256", 00:16:21.862 "state": "completed" 00:16:21.862 }, 00:16:21.862 "cntlid": 3, 00:16:21.862 "listen_address": { 00:16:21.862 "adrfam": "IPv4", 00:16:21.862 "traddr": "10.0.0.2", 00:16:21.862 "trsvcid": "4420", 00:16:21.862 "trtype": "TCP" 00:16:21.862 }, 00:16:21.862 "peer_address": { 
00:16:21.862 "adrfam": "IPv4", 00:16:21.862 "traddr": "10.0.0.1", 00:16:21.862 "trsvcid": "52606", 00:16:21.862 "trtype": "TCP" 00:16:21.862 }, 00:16:21.862 "qid": 0, 00:16:21.862 "state": "enabled", 00:16:21.862 "thread": "nvmf_tgt_poll_group_000" 00:16:21.862 } 00:16:21.862 ]' 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.862 07:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.428 07:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:16:22.994 07:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.994 07:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:22.994 07:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.994 07:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.994 07:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.994 07:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.994 07:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.994 07:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.253 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.511 00:16:23.511 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.511 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.511 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.077 { 00:16:24.077 "auth": { 00:16:24.077 "dhgroup": "null", 00:16:24.077 "digest": "sha256", 00:16:24.077 "state": "completed" 00:16:24.077 }, 00:16:24.077 "cntlid": 5, 00:16:24.077 "listen_address": { 00:16:24.077 "adrfam": "IPv4", 00:16:24.077 "traddr": "10.0.0.2", 00:16:24.077 "trsvcid": "4420", 00:16:24.077 "trtype": "TCP" 00:16:24.077 }, 00:16:24.077 "peer_address": { 00:16:24.077 "adrfam": "IPv4", 00:16:24.077 "traddr": "10.0.0.1", 00:16:24.077 "trsvcid": "52632", 00:16:24.077 "trtype": "TCP" 00:16:24.077 }, 00:16:24.077 "qid": 0, 00:16:24.077 "state": "enabled", 00:16:24.077 "thread": "nvmf_tgt_poll_group_000" 00:16:24.077 } 00:16:24.077 ]' 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:24.077 07:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.077 07:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.077 07:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.077 07:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.335 07:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:16:24.916 07:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.916 07:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:24.916 07:02:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.916 07:02:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.916 07:02:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.916 07:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.916 07:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.916 07:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.175 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.432 00:16:25.432 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.433 07:02:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.433 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.691 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.691 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.691 07:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.691 07:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.691 07:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.691 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.691 { 00:16:25.691 "auth": { 00:16:25.691 "dhgroup": "null", 00:16:25.691 "digest": "sha256", 00:16:25.691 "state": "completed" 00:16:25.691 }, 00:16:25.691 "cntlid": 7, 00:16:25.691 "listen_address": { 00:16:25.691 "adrfam": "IPv4", 00:16:25.691 "traddr": "10.0.0.2", 00:16:25.691 "trsvcid": "4420", 00:16:25.691 "trtype": "TCP" 00:16:25.691 }, 00:16:25.691 "peer_address": { 00:16:25.691 "adrfam": "IPv4", 00:16:25.691 "traddr": "10.0.0.1", 00:16:25.691 "trsvcid": "52658", 00:16:25.691 "trtype": "TCP" 00:16:25.691 }, 00:16:25.691 "qid": 0, 00:16:25.691 "state": "enabled", 00:16:25.691 "thread": "nvmf_tgt_poll_group_000" 00:16:25.691 } 00:16:25.691 ]' 00:16:25.691 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.691 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.691 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.949 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:25.949 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.949 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.949 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.949 07:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.207 07:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:16:26.804 07:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.804 07:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:26.804 07:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.804 07:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.804 07:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.804 07:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:16:26.804 07:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.804 07:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.804 07:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.077 07:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.078 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.078 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.656 00:16:27.656 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.656 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.656 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.915 { 00:16:27.915 "auth": { 00:16:27.915 "dhgroup": "ffdhe2048", 00:16:27.915 "digest": "sha256", 00:16:27.915 "state": "completed" 00:16:27.915 }, 00:16:27.915 "cntlid": 9, 00:16:27.915 "listen_address": { 00:16:27.915 "adrfam": "IPv4", 
00:16:27.915 "traddr": "10.0.0.2", 00:16:27.915 "trsvcid": "4420", 00:16:27.915 "trtype": "TCP" 00:16:27.915 }, 00:16:27.915 "peer_address": { 00:16:27.915 "adrfam": "IPv4", 00:16:27.915 "traddr": "10.0.0.1", 00:16:27.915 "trsvcid": "50878", 00:16:27.915 "trtype": "TCP" 00:16:27.915 }, 00:16:27.915 "qid": 0, 00:16:27.915 "state": "enabled", 00:16:27.915 "thread": "nvmf_tgt_poll_group_000" 00:16:27.915 } 00:16:27.915 ]' 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.915 07:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.174 07:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:16:28.741 07:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.741 07:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:28.741 07:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.741 07:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.741 07:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.741 07:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.741 07:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.741 07:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.000 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.571 00:16:29.571 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.571 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.571 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.829 { 00:16:29.829 "auth": { 00:16:29.829 "dhgroup": "ffdhe2048", 00:16:29.829 "digest": "sha256", 00:16:29.829 "state": "completed" 00:16:29.829 }, 00:16:29.829 "cntlid": 11, 00:16:29.829 "listen_address": { 00:16:29.829 "adrfam": "IPv4", 00:16:29.829 "traddr": "10.0.0.2", 00:16:29.829 "trsvcid": "4420", 00:16:29.829 "trtype": "TCP" 00:16:29.829 }, 00:16:29.829 "peer_address": { 00:16:29.829 "adrfam": "IPv4", 00:16:29.829 "traddr": "10.0.0.1", 00:16:29.829 "trsvcid": "50900", 00:16:29.829 "trtype": "TCP" 00:16:29.829 }, 00:16:29.829 "qid": 0, 00:16:29.829 "state": "enabled", 00:16:29.829 "thread": "nvmf_tgt_poll_group_000" 00:16:29.829 } 00:16:29.829 ]' 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.829 07:02:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.829 07:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.087 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:16:30.653 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.653 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:30.653 07:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.653 07:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.653 07:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.653 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.653 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.653 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.221 07:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.221 00:16:31.221 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.221 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.221 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.788 { 00:16:31.788 "auth": { 00:16:31.788 "dhgroup": "ffdhe2048", 00:16:31.788 "digest": "sha256", 00:16:31.788 "state": "completed" 00:16:31.788 }, 00:16:31.788 "cntlid": 13, 00:16:31.788 "listen_address": { 00:16:31.788 "adrfam": "IPv4", 00:16:31.788 "traddr": "10.0.0.2", 00:16:31.788 "trsvcid": "4420", 00:16:31.788 "trtype": "TCP" 00:16:31.788 }, 00:16:31.788 "peer_address": { 00:16:31.788 "adrfam": "IPv4", 00:16:31.788 "traddr": "10.0.0.1", 00:16:31.788 "trsvcid": "50934", 00:16:31.788 "trtype": "TCP" 00:16:31.788 }, 00:16:31.788 "qid": 0, 00:16:31.788 "state": "enabled", 00:16:31.788 "thread": "nvmf_tgt_poll_group_000" 00:16:31.788 } 00:16:31.788 ]' 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.788 07:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.047 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:16:32.614 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.873 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 
00:16:32.873 07:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.873 07:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.873 07:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.873 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.874 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.874 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:33.132 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.133 07:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.392 00:16:33.392 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.392 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.392 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.650 { 00:16:33.650 "auth": { 00:16:33.650 "dhgroup": 
"ffdhe2048", 00:16:33.650 "digest": "sha256", 00:16:33.650 "state": "completed" 00:16:33.650 }, 00:16:33.650 "cntlid": 15, 00:16:33.650 "listen_address": { 00:16:33.650 "adrfam": "IPv4", 00:16:33.650 "traddr": "10.0.0.2", 00:16:33.650 "trsvcid": "4420", 00:16:33.650 "trtype": "TCP" 00:16:33.650 }, 00:16:33.650 "peer_address": { 00:16:33.650 "adrfam": "IPv4", 00:16:33.650 "traddr": "10.0.0.1", 00:16:33.650 "trsvcid": "50970", 00:16:33.650 "trtype": "TCP" 00:16:33.650 }, 00:16:33.650 "qid": 0, 00:16:33.650 "state": "enabled", 00:16:33.650 "thread": "nvmf_tgt_poll_group_000" 00:16:33.650 } 00:16:33.650 ]' 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.650 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.909 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.909 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.909 07:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.168 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:16:34.735 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.735 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:34.735 07:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.735 07:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.735 07:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.735 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.735 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.735 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.735 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.992 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:34.992 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.992 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.992 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:16:34.992 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:34.992 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.992 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.992 07:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.993 07:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.993 07:02:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.993 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.993 07:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.557 00:16:35.557 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.557 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.557 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.557 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.557 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.557 07:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.557 07:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.557 07:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.557 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.557 { 00:16:35.557 "auth": { 00:16:35.557 "dhgroup": "ffdhe3072", 00:16:35.557 "digest": "sha256", 00:16:35.557 "state": "completed" 00:16:35.557 }, 00:16:35.557 "cntlid": 17, 00:16:35.557 "listen_address": { 00:16:35.557 "adrfam": "IPv4", 00:16:35.557 "traddr": "10.0.0.2", 00:16:35.557 "trsvcid": "4420", 00:16:35.557 "trtype": "TCP" 00:16:35.557 }, 00:16:35.557 "peer_address": { 00:16:35.557 "adrfam": "IPv4", 00:16:35.557 "traddr": "10.0.0.1", 00:16:35.557 "trsvcid": "51014", 00:16:35.557 "trtype": "TCP" 00:16:35.558 }, 00:16:35.558 "qid": 0, 00:16:35.558 "state": "enabled", 00:16:35.558 "thread": "nvmf_tgt_poll_group_000" 00:16:35.558 } 00:16:35.558 ]' 00:16:35.558 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.816 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.816 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.816 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.816 07:02:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.816 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.816 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.816 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.074 07:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:16:36.643 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.643 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:36.643 07:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.643 07:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.643 07:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.643 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.643 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.643 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.902 07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.902 
07:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.162 00:16:37.162 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.162 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.162 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.422 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.422 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.422 07:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.422 07:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.422 07:02:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.422 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.422 { 00:16:37.422 "auth": { 00:16:37.422 "dhgroup": "ffdhe3072", 00:16:37.422 "digest": "sha256", 00:16:37.422 "state": "completed" 00:16:37.422 }, 00:16:37.422 "cntlid": 19, 00:16:37.422 "listen_address": { 00:16:37.422 "adrfam": "IPv4", 00:16:37.422 "traddr": "10.0.0.2", 00:16:37.422 "trsvcid": "4420", 00:16:37.422 "trtype": "TCP" 00:16:37.422 }, 00:16:37.422 "peer_address": { 00:16:37.422 "adrfam": "IPv4", 00:16:37.422 "traddr": "10.0.0.1", 00:16:37.422 "trsvcid": "51040", 00:16:37.422 "trtype": "TCP" 00:16:37.422 }, 00:16:37.422 "qid": 0, 00:16:37.422 "state": "enabled", 00:16:37.422 "thread": "nvmf_tgt_poll_group_000" 00:16:37.422 } 00:16:37.422 ]' 00:16:37.422 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.681 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.681 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.681 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.681 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.681 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.681 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.681 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.940 07:02:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
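For readers following the trace, the cycle the log keeps repeating for each digest/dhgroup/key combination can be condensed into the bash sketch below. Only the rpc.py and nvme invocations, the /var/tmp/host.sock socket, the NQNs and the host UUID are taken from the log above; the function wrapper, variable names and the placeholder secret variables are illustrative assumptions, not the literal target/auth.sh source.

    # Condensed sketch of one connect_authenticate pass as seen in the trace.
    # Assumes the autotest helpers (rpc_cmd for target-side RPCs) are sourced,
    # as they are in this test environment; errexit/xtrace handle failures.
    host_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    connect_cycle() {
        local digest=$1 dhgroup=$2 keyid=$3

        # host side: restrict bdev_nvme to the digest/dhgroup under test
        $host_rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # target side: allow the host to authenticate with keyN/ckeyN
        # (the real script drops --dhchap-ctrlr-key when no ckey exists for
        # that index, which is why key3 appears without one in the trace)
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
            nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

        # host side: attach a controller over TCP so DH-HMAC-CHAP runs
        $host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd \
            -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

        # target side: verify the negotiated digest/dhgroup/auth state
        # (that check is sketched separately further down the log)

        # tear down the RPC-driven controller, then repeat the handshake
        # through the kernel initiator using the raw DHHC-1 secrets for the
        # same key pair ($dhchap_secret/$dhchap_ctrl_secret are placeholders)
        $host_rpc bdev_nvme_detach_controller nvme0
        nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
            -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd \
            --hostid 43021b44-defc-4eee-995c-65b6e79138bd \
            --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"
        nvme disconnect -n nqn.2024-03.io.spdk:cnode0
        rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
            nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd
    }
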
00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.876 07:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.877 07:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.877 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.877 07:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.135 00:16:39.135 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.135 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.135 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.394 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.394 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.394 07:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.394 07:02:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.394 07:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.394 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.394 { 00:16:39.394 "auth": { 00:16:39.394 "dhgroup": "ffdhe3072", 00:16:39.394 "digest": "sha256", 00:16:39.394 "state": "completed" 00:16:39.394 }, 00:16:39.394 "cntlid": 21, 00:16:39.394 "listen_address": { 00:16:39.394 "adrfam": "IPv4", 00:16:39.394 "traddr": "10.0.0.2", 00:16:39.394 "trsvcid": "4420", 00:16:39.394 "trtype": "TCP" 00:16:39.394 }, 00:16:39.394 "peer_address": { 00:16:39.394 "adrfam": "IPv4", 00:16:39.394 "traddr": "10.0.0.1", 00:16:39.394 "trsvcid": "51696", 00:16:39.394 "trtype": "TCP" 00:16:39.394 }, 00:16:39.394 "qid": 0, 00:16:39.394 "state": "enabled", 00:16:39.394 "thread": "nvmf_tgt_poll_group_000" 00:16:39.394 } 00:16:39.394 ]' 00:16:39.394 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.653 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.653 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.653 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.653 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.653 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.653 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.653 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.913 07:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:16:40.479 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:40.738 07:02:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.738 07:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.306 00:16:41.306 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.306 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.306 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.565 { 00:16:41.565 "auth": { 00:16:41.565 "dhgroup": "ffdhe3072", 00:16:41.565 "digest": "sha256", 00:16:41.565 "state": "completed" 00:16:41.565 }, 00:16:41.565 "cntlid": 23, 00:16:41.565 "listen_address": { 00:16:41.565 "adrfam": "IPv4", 00:16:41.565 "traddr": "10.0.0.2", 00:16:41.565 "trsvcid": "4420", 00:16:41.565 "trtype": "TCP" 00:16:41.565 }, 00:16:41.565 "peer_address": { 00:16:41.565 "adrfam": "IPv4", 00:16:41.565 "traddr": "10.0.0.1", 00:16:41.565 "trsvcid": "51714", 00:16:41.565 "trtype": "TCP" 00:16:41.565 }, 00:16:41.565 "qid": 0, 00:16:41.565 "state": "enabled", 00:16:41.565 "thread": "nvmf_tgt_poll_group_000" 00:16:41.565 } 00:16:41.565 ]' 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.565 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.566 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.566 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.824 07:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:16:42.445 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.734 07:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.302 00:16:43.302 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.302 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.302 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.561 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.561 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.561 07:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.561 07:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.561 07:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.561 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.561 { 00:16:43.561 "auth": { 00:16:43.561 "dhgroup": "ffdhe4096", 00:16:43.561 "digest": "sha256", 00:16:43.561 "state": "completed" 00:16:43.561 }, 00:16:43.561 "cntlid": 25, 00:16:43.561 "listen_address": { 00:16:43.561 "adrfam": "IPv4", 00:16:43.561 "traddr": "10.0.0.2", 00:16:43.561 "trsvcid": "4420", 00:16:43.561 "trtype": "TCP" 00:16:43.561 }, 00:16:43.561 "peer_address": { 00:16:43.561 "adrfam": "IPv4", 00:16:43.561 "traddr": "10.0.0.1", 00:16:43.561 "trsvcid": "51736", 00:16:43.561 "trtype": "TCP" 00:16:43.561 }, 00:16:43.561 "qid": 0, 00:16:43.562 "state": "enabled", 00:16:43.562 "thread": "nvmf_tgt_poll_group_000" 00:16:43.562 } 00:16:43.562 ]' 00:16:43.562 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.562 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.562 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.562 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.562 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.562 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.562 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.562 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.821 07:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret 
DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:16:44.388 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.388 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:44.388 07:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.388 07:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.647 07:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.647 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.647 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.647 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.906 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:44.906 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.906 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.906 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:44.906 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.906 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.906 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.906 07:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.906 07:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.907 07:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.907 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.907 07:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.165 00:16:45.165 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.165 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.165 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.423 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
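The jq probes at target/auth.sh@44-48 that recur after every attach boil down to three field checks on the first reported qpair. A minimal stand-alone sketch is below; the subsystem NQN, RPC socket path and jq filters come from the log, while the helper name, argument handling and explicit "|| return 1" failure handling are assumptions.

    # Sketch of the post-attach verification seen at target/auth.sh@44-48:
    # the controller must appear under the requested name, and the qpair must
    # report the expected digest/dhgroup with authentication "completed".
    verify_auth_state() {
        local expected_digest=$1 expected_dhgroup=$2
        local host_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

        # host side: the attached controller shows up as nvme0
        [[ "$($host_rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]] || return 1

        # target side: rpc_cmd is the target-side RPC helper used in the trace
        local qpairs
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0) || return 1
        [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$expected_digest" ]]  || return 1
        [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$expected_dhgroup" ]] || return 1
        [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]         || return 1
    }
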
00:16:45.423 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.423 07:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.423 07:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.423 07:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.423 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.423 { 00:16:45.423 "auth": { 00:16:45.423 "dhgroup": "ffdhe4096", 00:16:45.423 "digest": "sha256", 00:16:45.423 "state": "completed" 00:16:45.423 }, 00:16:45.423 "cntlid": 27, 00:16:45.423 "listen_address": { 00:16:45.423 "adrfam": "IPv4", 00:16:45.423 "traddr": "10.0.0.2", 00:16:45.423 "trsvcid": "4420", 00:16:45.423 "trtype": "TCP" 00:16:45.423 }, 00:16:45.423 "peer_address": { 00:16:45.423 "adrfam": "IPv4", 00:16:45.423 "traddr": "10.0.0.1", 00:16:45.423 "trsvcid": "51768", 00:16:45.423 "trtype": "TCP" 00:16:45.423 }, 00:16:45.423 "qid": 0, 00:16:45.423 "state": "enabled", 00:16:45.423 "thread": "nvmf_tgt_poll_group_000" 00:16:45.423 } 00:16:45.423 ]' 00:16:45.423 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.681 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.681 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.681 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.681 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.681 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.681 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.681 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.938 07:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:16:46.504 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.505 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:46.505 07:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.505 07:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.763 07:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.329 00:16:47.329 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.329 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.329 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.587 { 00:16:47.587 "auth": { 00:16:47.587 "dhgroup": "ffdhe4096", 00:16:47.587 "digest": "sha256", 00:16:47.587 "state": "completed" 00:16:47.587 }, 00:16:47.587 "cntlid": 29, 00:16:47.587 "listen_address": { 00:16:47.587 "adrfam": "IPv4", 00:16:47.587 "traddr": "10.0.0.2", 00:16:47.587 "trsvcid": "4420", 00:16:47.587 "trtype": "TCP" 00:16:47.587 }, 00:16:47.587 "peer_address": { 00:16:47.587 "adrfam": "IPv4", 00:16:47.587 "traddr": "10.0.0.1", 00:16:47.587 "trsvcid": "51804", 00:16:47.587 "trtype": "TCP" 00:16:47.587 }, 00:16:47.587 "qid": 0, 00:16:47.587 "state": "enabled", 00:16:47.587 "thread": 
"nvmf_tgt_poll_group_000" 00:16:47.587 } 00:16:47.587 ]' 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.587 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.845 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.845 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.845 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.103 07:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:16:48.669 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.669 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:48.669 07:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.669 07:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.669 07:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.669 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.669 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.669 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.927 07:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.185 00:16:49.185 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.185 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.185 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.444 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.444 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.444 07:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.444 07:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.444 07:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.444 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.444 { 00:16:49.444 "auth": { 00:16:49.444 "dhgroup": "ffdhe4096", 00:16:49.444 "digest": "sha256", 00:16:49.444 "state": "completed" 00:16:49.444 }, 00:16:49.444 "cntlid": 31, 00:16:49.444 "listen_address": { 00:16:49.444 "adrfam": "IPv4", 00:16:49.444 "traddr": "10.0.0.2", 00:16:49.444 "trsvcid": "4420", 00:16:49.444 "trtype": "TCP" 00:16:49.444 }, 00:16:49.444 "peer_address": { 00:16:49.444 "adrfam": "IPv4", 00:16:49.444 "traddr": "10.0.0.1", 00:16:49.444 "trsvcid": "39902", 00:16:49.444 "trtype": "TCP" 00:16:49.444 }, 00:16:49.444 "qid": 0, 00:16:49.444 "state": "enabled", 00:16:49.444 "thread": "nvmf_tgt_poll_group_000" 00:16:49.444 } 00:16:49.444 ]' 00:16:49.444 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.702 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.702 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.702 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.702 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.702 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.702 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.702 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.961 07:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 
43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:16:50.528 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.528 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:50.528 07:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.528 07:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.528 07:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.528 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.528 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.528 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.528 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.788 07:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.356 00:16:51.356 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.356 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.356 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.615 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.615 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.615 07:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.615 07:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.615 07:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.615 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.615 { 00:16:51.615 "auth": { 00:16:51.615 "dhgroup": "ffdhe6144", 00:16:51.615 "digest": "sha256", 00:16:51.615 "state": "completed" 00:16:51.615 }, 00:16:51.615 "cntlid": 33, 00:16:51.615 "listen_address": { 00:16:51.615 "adrfam": "IPv4", 00:16:51.615 "traddr": "10.0.0.2", 00:16:51.615 "trsvcid": "4420", 00:16:51.615 "trtype": "TCP" 00:16:51.615 }, 00:16:51.615 "peer_address": { 00:16:51.615 "adrfam": "IPv4", 00:16:51.615 "traddr": "10.0.0.1", 00:16:51.615 "trsvcid": "39922", 00:16:51.615 "trtype": "TCP" 00:16:51.615 }, 00:16:51.615 "qid": 0, 00:16:51.615 "state": "enabled", 00:16:51.615 "thread": "nvmf_tgt_poll_group_000" 00:16:51.615 } 00:16:51.615 ]' 00:16:51.615 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.615 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.615 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.874 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.874 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.874 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.874 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.874 07:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.133 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:16:52.700 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.700 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:52.700 07:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.700 07:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.700 07:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.700 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:52.700 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.700 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.959 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:52.959 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.959 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.959 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:52.959 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:52.959 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.959 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.959 07:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.960 07:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.960 07:03:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.960 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.960 07:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.527 00:16:53.527 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.527 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.527 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.786 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.786 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.786 07:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.786 07:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.786 07:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.786 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.786 { 00:16:53.786 "auth": { 00:16:53.786 "dhgroup": "ffdhe6144", 00:16:53.786 "digest": "sha256", 00:16:53.786 "state": "completed" 00:16:53.786 }, 00:16:53.786 "cntlid": 35, 00:16:53.786 "listen_address": { 00:16:53.786 "adrfam": "IPv4", 00:16:53.786 "traddr": "10.0.0.2", 00:16:53.786 "trsvcid": "4420", 00:16:53.786 "trtype": "TCP" 00:16:53.786 }, 00:16:53.786 
"peer_address": { 00:16:53.786 "adrfam": "IPv4", 00:16:53.786 "traddr": "10.0.0.1", 00:16:53.786 "trsvcid": "39960", 00:16:53.786 "trtype": "TCP" 00:16:53.787 }, 00:16:53.787 "qid": 0, 00:16:53.787 "state": "enabled", 00:16:53.787 "thread": "nvmf_tgt_poll_group_000" 00:16:53.787 } 00:16:53.787 ]' 00:16:53.787 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.787 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.787 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.787 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.787 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.787 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.787 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.787 07:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.046 07:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:16:54.990 07:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.991 07:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:54.991 07:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.991 07:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.991 07:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.991 07:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.991 07:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.991 07:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.991 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:54.991 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.991 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.991 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:54.991 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:54.991 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.991 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.991 07:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.991 07:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.249 07:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.249 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.249 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.507 00:16:55.507 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.507 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.507 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.765 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.765 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.765 07:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.765 07:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.765 07:03:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.765 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.765 { 00:16:55.765 "auth": { 00:16:55.765 "dhgroup": "ffdhe6144", 00:16:55.765 "digest": "sha256", 00:16:55.765 "state": "completed" 00:16:55.765 }, 00:16:55.765 "cntlid": 37, 00:16:55.765 "listen_address": { 00:16:55.765 "adrfam": "IPv4", 00:16:55.765 "traddr": "10.0.0.2", 00:16:55.765 "trsvcid": "4420", 00:16:55.765 "trtype": "TCP" 00:16:55.765 }, 00:16:55.765 "peer_address": { 00:16:55.765 "adrfam": "IPv4", 00:16:55.765 "traddr": "10.0.0.1", 00:16:55.765 "trsvcid": "39982", 00:16:55.765 "trtype": "TCP" 00:16:55.765 }, 00:16:55.765 "qid": 0, 00:16:55.765 "state": "enabled", 00:16:55.765 "thread": "nvmf_tgt_poll_group_000" 00:16:55.765 } 00:16:55.765 ]' 00:16:56.023 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.023 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.023 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.023 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.023 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.023 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.023 07:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.024 07:03:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.283 07:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:16:56.848 07:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.848 07:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:56.848 07:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.848 07:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.848 07:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.848 07:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.848 07:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.848 07:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.107 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.673 00:16:57.673 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:16:57.673 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.673 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.976 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.976 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.976 07:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.976 07:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.976 07:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.976 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.976 { 00:16:57.976 "auth": { 00:16:57.976 "dhgroup": "ffdhe6144", 00:16:57.976 "digest": "sha256", 00:16:57.976 "state": "completed" 00:16:57.976 }, 00:16:57.976 "cntlid": 39, 00:16:57.976 "listen_address": { 00:16:57.976 "adrfam": "IPv4", 00:16:57.976 "traddr": "10.0.0.2", 00:16:57.976 "trsvcid": "4420", 00:16:57.976 "trtype": "TCP" 00:16:57.976 }, 00:16:57.976 "peer_address": { 00:16:57.976 "adrfam": "IPv4", 00:16:57.976 "traddr": "10.0.0.1", 00:16:57.976 "trsvcid": "40162", 00:16:57.976 "trtype": "TCP" 00:16:57.976 }, 00:16:57.976 "qid": 0, 00:16:57.976 "state": "enabled", 00:16:57.976 "thread": "nvmf_tgt_poll_group_000" 00:16:57.976 } 00:16:57.976 ]' 00:16:57.976 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.976 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.976 07:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.976 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.976 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.234 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.234 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.234 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.491 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:16:59.057 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.057 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:16:59.057 07:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.057 07:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.057 07:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
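Each connect_authenticate round traced above repeats the same host/target RPC sequence before the qpair checks. The sketch below condenses one round using only commands that appear in this log; the key index placeholder $KEYID (0-3) and the line breaks are illustrative rather than the literal target/auth.sh source.

# Host side: restrict bdev_nvme to the digest/dhgroup pair under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Target side: allow the host NQN with the DH-HMAC-CHAP key pair for this round
# (the --dhchap-ctrlr-key argument is dropped when no controller key exists for
#  the index, as the trace shows for key3)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd \
    --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

# Host side: attach a controller that must complete in-band authentication
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

# Target side: the new qpair should report the negotiated auth parameters
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0

After the jq assertions on that output, the round is torn down with bdev_nvme_detach_controller nvme0, a kernel-initiator pass via nvme connect/disconnect with the matching DHHC-1 secrets, and nvmf_subsystem_remove_host, all of which are visible in the surrounding trace.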
00:16:59.057 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.057 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.057 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.057 07:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.314 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:59.314 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.314 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.315 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:59.315 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.315 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.315 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.315 07:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.315 07:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.315 07:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.315 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.315 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.879 00:16:59.879 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.879 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.879 07:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.136 { 00:17:00.136 "auth": { 00:17:00.136 "dhgroup": "ffdhe8192", 00:17:00.136 "digest": "sha256", 00:17:00.136 "state": "completed" 00:17:00.136 }, 00:17:00.136 "cntlid": 41, 
00:17:00.136 "listen_address": { 00:17:00.136 "adrfam": "IPv4", 00:17:00.136 "traddr": "10.0.0.2", 00:17:00.136 "trsvcid": "4420", 00:17:00.136 "trtype": "TCP" 00:17:00.136 }, 00:17:00.136 "peer_address": { 00:17:00.136 "adrfam": "IPv4", 00:17:00.136 "traddr": "10.0.0.1", 00:17:00.136 "trsvcid": "40190", 00:17:00.136 "trtype": "TCP" 00:17:00.136 }, 00:17:00.136 "qid": 0, 00:17:00.136 "state": "enabled", 00:17:00.136 "thread": "nvmf_tgt_poll_group_000" 00:17:00.136 } 00:17:00.136 ]' 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.136 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.394 07:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:17:00.961 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.961 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:00.961 07:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.961 07:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 07:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.220 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.220 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.220 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:01.478 
07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.478 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.044 00:17:02.044 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.044 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.044 07:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.303 { 00:17:02.303 "auth": { 00:17:02.303 "dhgroup": "ffdhe8192", 00:17:02.303 "digest": "sha256", 00:17:02.303 "state": "completed" 00:17:02.303 }, 00:17:02.303 "cntlid": 43, 00:17:02.303 "listen_address": { 00:17:02.303 "adrfam": "IPv4", 00:17:02.303 "traddr": "10.0.0.2", 00:17:02.303 "trsvcid": "4420", 00:17:02.303 "trtype": "TCP" 00:17:02.303 }, 00:17:02.303 "peer_address": { 00:17:02.303 "adrfam": "IPv4", 00:17:02.303 "traddr": "10.0.0.1", 00:17:02.303 "trsvcid": "40216", 00:17:02.303 "trtype": "TCP" 00:17:02.303 }, 00:17:02.303 "qid": 0, 00:17:02.303 "state": "enabled", 00:17:02.303 "thread": "nvmf_tgt_poll_group_000" 00:17:02.303 } 00:17:02.303 ]' 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.303 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.562 07:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:17:03.498 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.498 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:03.498 07:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.498 07:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.498 07:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.498 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.498 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.498 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.757 07:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.325 00:17:04.325 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.325 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.325 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.584 { 00:17:04.584 "auth": { 00:17:04.584 "dhgroup": "ffdhe8192", 00:17:04.584 "digest": "sha256", 00:17:04.584 "state": "completed" 00:17:04.584 }, 00:17:04.584 "cntlid": 45, 00:17:04.584 "listen_address": { 00:17:04.584 "adrfam": "IPv4", 00:17:04.584 "traddr": "10.0.0.2", 00:17:04.584 "trsvcid": "4420", 00:17:04.584 "trtype": "TCP" 00:17:04.584 }, 00:17:04.584 "peer_address": { 00:17:04.584 "adrfam": "IPv4", 00:17:04.584 "traddr": "10.0.0.1", 00:17:04.584 "trsvcid": "40248", 00:17:04.584 "trtype": "TCP" 00:17:04.584 }, 00:17:04.584 "qid": 0, 00:17:04.584 "state": "enabled", 00:17:04.584 "thread": "nvmf_tgt_poll_group_000" 00:17:04.584 } 00:17:04.584 ]' 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.584 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.843 07:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.803 07:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.739 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:06.739 { 00:17:06.739 "auth": { 00:17:06.739 "dhgroup": "ffdhe8192", 00:17:06.739 "digest": "sha256", 00:17:06.739 "state": "completed" 00:17:06.739 }, 00:17:06.739 "cntlid": 47, 00:17:06.739 "listen_address": { 00:17:06.739 "adrfam": "IPv4", 00:17:06.739 "traddr": "10.0.0.2", 00:17:06.739 "trsvcid": "4420", 00:17:06.739 "trtype": "TCP" 00:17:06.739 }, 00:17:06.739 "peer_address": { 00:17:06.739 "adrfam": "IPv4", 00:17:06.739 "traddr": "10.0.0.1", 00:17:06.739 "trsvcid": "40278", 00:17:06.739 "trtype": "TCP" 00:17:06.739 }, 00:17:06.739 "qid": 0, 00:17:06.739 "state": "enabled", 00:17:06.739 "thread": "nvmf_tgt_poll_group_000" 00:17:06.739 } 00:17:06.739 ]' 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.739 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.997 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.997 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.997 07:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.255 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:07.822 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.080 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:08.080 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
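With the sha256 rounds complete, the digest loop has advanced to sha384 and the dhgroup list restarts at "null", i.e. DH-HMAC-CHAP without an FFDHE exchange. Each round is then validated the same way: the target's qpair listing is queried and the negotiated parameters are compared with what the host requested. A minimal paraphrase of that check for this round, built only from commands seen in this log (the variable and herestring plumbing is illustrative, not the script's literal wording):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# digest and dhgroup must match what was passed to bdev_nvme_set_options
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
# "completed" indicates the DH-HMAC-CHAP transaction finished on this qpair
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]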
00:17:08.080 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.080 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:08.080 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:08.080 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.080 07:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.080 07:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.080 07:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.080 07:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.080 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.080 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.339 00:17:08.339 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.339 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.339 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.599 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.599 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.599 07:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.599 07:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.599 07:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.599 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.599 { 00:17:08.599 "auth": { 00:17:08.599 "dhgroup": "null", 00:17:08.599 "digest": "sha384", 00:17:08.599 "state": "completed" 00:17:08.599 }, 00:17:08.599 "cntlid": 49, 00:17:08.599 "listen_address": { 00:17:08.599 "adrfam": "IPv4", 00:17:08.599 "traddr": "10.0.0.2", 00:17:08.599 "trsvcid": "4420", 00:17:08.599 "trtype": "TCP" 00:17:08.599 }, 00:17:08.599 "peer_address": { 00:17:08.599 "adrfam": "IPv4", 00:17:08.599 "traddr": "10.0.0.1", 00:17:08.599 "trsvcid": "43956", 00:17:08.599 "trtype": "TCP" 00:17:08.599 }, 00:17:08.599 "qid": 0, 00:17:08.599 "state": "enabled", 00:17:08.599 "thread": "nvmf_tgt_poll_group_000" 00:17:08.599 } 00:17:08.599 ]' 00:17:08.599 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.599 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.599 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.859 07:03:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:08.859 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.859 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.859 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.859 07:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.118 07:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:17:09.685 07:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.685 07:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:09.685 07:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.685 07:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.944 07:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.944 07:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.944 07:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.944 07:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.203 07:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.204 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.204 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.463 00:17:10.463 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.463 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.463 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.723 { 00:17:10.723 "auth": { 00:17:10.723 "dhgroup": "null", 00:17:10.723 "digest": "sha384", 00:17:10.723 "state": "completed" 00:17:10.723 }, 00:17:10.723 "cntlid": 51, 00:17:10.723 "listen_address": { 00:17:10.723 "adrfam": "IPv4", 00:17:10.723 "traddr": "10.0.0.2", 00:17:10.723 "trsvcid": "4420", 00:17:10.723 "trtype": "TCP" 00:17:10.723 }, 00:17:10.723 "peer_address": { 00:17:10.723 "adrfam": "IPv4", 00:17:10.723 "traddr": "10.0.0.1", 00:17:10.723 "trsvcid": "43986", 00:17:10.723 "trtype": "TCP" 00:17:10.723 }, 00:17:10.723 "qid": 0, 00:17:10.723 "state": "enabled", 00:17:10.723 "thread": "nvmf_tgt_poll_group_000" 00:17:10.723 } 00:17:10.723 ]' 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.723 07:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.982 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:17:11.550 07:03:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.809 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:11.809 07:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.809 07:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.809 07:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.809 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.809 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.809 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.068 07:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.327 00:17:12.327 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.327 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.327 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.587 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.587 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.587 07:03:20 nvmf_tcp.nvmf_auth_target 
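Each attach in this trace is followed by the same verification: list the host's controllers and confirm nvme0 exists, then fetch the subsystem's qpairs from the target and check the negotiated auth fields with jq. A condensed form of the checks that produce the qpair JSON blocks in this log (subsystem NQN taken from this run; the expected dhgroup changes with each iteration):

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # sha384 throughout this section
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # null / ffdhe2048 / ffdhe3072 depending on the round
    jq -r '.[0].auth.state'   <<< "$qpairs"   # "completed" once DH-CHAP has succeeded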
-- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.587 07:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.587 07:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.587 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.587 { 00:17:12.587 "auth": { 00:17:12.587 "dhgroup": "null", 00:17:12.587 "digest": "sha384", 00:17:12.587 "state": "completed" 00:17:12.587 }, 00:17:12.587 "cntlid": 53, 00:17:12.587 "listen_address": { 00:17:12.587 "adrfam": "IPv4", 00:17:12.587 "traddr": "10.0.0.2", 00:17:12.587 "trsvcid": "4420", 00:17:12.587 "trtype": "TCP" 00:17:12.587 }, 00:17:12.587 "peer_address": { 00:17:12.587 "adrfam": "IPv4", 00:17:12.587 "traddr": "10.0.0.1", 00:17:12.587 "trsvcid": "43996", 00:17:12.587 "trtype": "TCP" 00:17:12.587 }, 00:17:12.587 "qid": 0, 00:17:12.587 "state": "enabled", 00:17:12.587 "thread": "nvmf_tgt_poll_group_000" 00:17:12.587 } 00:17:12.587 ]' 00:17:12.587 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.587 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.587 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.847 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:12.847 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.847 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.847 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.847 07:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.123 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:17:13.703 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.703 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:13.703 07:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.703 07:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.703 07:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.703 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.703 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.703 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.962 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:17:13.962 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.962 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.962 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.962 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:13.962 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.962 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:17:13.963 07:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.963 07:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.963 07:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.963 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.963 07:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.222 00:17:14.222 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.222 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.222 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.480 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.480 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.480 07:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.480 07:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.739 07:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.739 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.739 { 00:17:14.739 "auth": { 00:17:14.739 "dhgroup": "null", 00:17:14.739 "digest": "sha384", 00:17:14.739 "state": "completed" 00:17:14.739 }, 00:17:14.739 "cntlid": 55, 00:17:14.739 "listen_address": { 00:17:14.739 "adrfam": "IPv4", 00:17:14.739 "traddr": "10.0.0.2", 00:17:14.739 "trsvcid": "4420", 00:17:14.739 "trtype": "TCP" 00:17:14.739 }, 00:17:14.739 "peer_address": { 00:17:14.739 "adrfam": "IPv4", 00:17:14.739 "traddr": "10.0.0.1", 00:17:14.739 "trsvcid": "44020", 00:17:14.739 "trtype": "TCP" 00:17:14.739 }, 00:17:14.739 "qid": 0, 00:17:14.739 "state": "enabled", 00:17:14.739 "thread": "nvmf_tgt_poll_group_000" 00:17:14.739 } 00:17:14.739 ]' 00:17:14.739 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.739 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.739 07:03:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.739 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:14.739 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.739 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.739 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.739 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.997 07:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:17:15.561 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.561 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:15.561 07:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.561 07:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.561 07:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.561 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.561 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.561 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.561 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
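After the SPDK-host attach/detach pass, each key is also exercised through the kernel initiator: nvme connect is handed the raw DHHC-1 secrets on the command line, and nvme disconnect tears the session down again. In the key3 rounds only --dhchap-secret is passed, i.e. authentication is one-way. With placeholder strings standing in for the DHHC-1 secrets recorded in this log:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd \
        --hostid 43021b44-defc-4eee-995c-65b6e79138bd \
        --dhchap-secret '<DHHC-1 host secret>' \
        --dhchap-ctrl-secret '<DHHC-1 controller secret>'   # omitted in the key3 rounds
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0           # expect "disconnected 1 controller(s)"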
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.818 07:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.382 00:17:16.382 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.382 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.382 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.640 { 00:17:16.640 "auth": { 00:17:16.640 "dhgroup": "ffdhe2048", 00:17:16.640 "digest": "sha384", 00:17:16.640 "state": "completed" 00:17:16.640 }, 00:17:16.640 "cntlid": 57, 00:17:16.640 "listen_address": { 00:17:16.640 "adrfam": "IPv4", 00:17:16.640 "traddr": "10.0.0.2", 00:17:16.640 "trsvcid": "4420", 00:17:16.640 "trtype": "TCP" 00:17:16.640 }, 00:17:16.640 "peer_address": { 00:17:16.640 "adrfam": "IPv4", 00:17:16.640 "traddr": "10.0.0.1", 00:17:16.640 "trsvcid": "44066", 00:17:16.640 "trtype": "TCP" 00:17:16.640 }, 00:17:16.640 "qid": 0, 00:17:16.640 "state": "enabled", 00:17:16.640 "thread": "nvmf_tgt_poll_group_000" 00:17:16.640 } 00:17:16.640 ]' 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.640 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.897 07:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret 
DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.829 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.830 07:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.396 00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.396 { 00:17:18.396 "auth": { 00:17:18.396 "dhgroup": "ffdhe2048", 00:17:18.396 "digest": "sha384", 00:17:18.396 "state": "completed" 00:17:18.396 }, 00:17:18.396 "cntlid": 59, 00:17:18.396 "listen_address": { 00:17:18.396 "adrfam": "IPv4", 00:17:18.396 "traddr": "10.0.0.2", 00:17:18.396 "trsvcid": "4420", 00:17:18.396 "trtype": "TCP" 00:17:18.396 }, 00:17:18.396 "peer_address": { 00:17:18.396 "adrfam": "IPv4", 00:17:18.396 "traddr": "10.0.0.1", 00:17:18.396 "trsvcid": "47080", 00:17:18.396 "trtype": "TCP" 00:17:18.396 }, 00:17:18.396 "qid": 0, 00:17:18.396 "state": "enabled", 00:17:18.396 "thread": "nvmf_tgt_poll_group_000" 00:17:18.396 } 00:17:18.396 ]' 00:17:18.396 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.654 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.654 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.654 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.654 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.654 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.654 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.654 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.912 07:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:17:19.478 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.478 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:19.478 07:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.478 07:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.478 07:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.478 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.478 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.478 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.736 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.993 00:17:19.993 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.993 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.993 07:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.250 { 00:17:20.250 "auth": { 00:17:20.250 "dhgroup": "ffdhe2048", 00:17:20.250 "digest": "sha384", 00:17:20.250 "state": "completed" 00:17:20.250 }, 00:17:20.250 "cntlid": 61, 00:17:20.250 "listen_address": { 00:17:20.250 "adrfam": "IPv4", 00:17:20.250 "traddr": "10.0.0.2", 00:17:20.250 "trsvcid": "4420", 00:17:20.250 "trtype": "TCP" 00:17:20.250 }, 00:17:20.250 "peer_address": { 00:17:20.250 "adrfam": "IPv4", 00:17:20.250 "traddr": "10.0.0.1", 00:17:20.250 "trsvcid": "47106", 00:17:20.250 "trtype": "TCP" 00:17:20.250 }, 00:17:20.250 "qid": 0, 00:17:20.250 "state": "enabled", 00:17:20.250 "thread": 
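Before each connect attempt the host is narrowed to a single digest/dhgroup combination via bdev_nvme_set_options, which appears to be what pins every round to the parameters under test (sha384 with null, ffdhe2048 or ffdhe3072 in this section). The call, as issued against the host app's RPC socket:

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048   # dhgroup varies with the outer loop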
"nvmf_tgt_poll_group_000" 00:17:20.250 } 00:17:20.250 ]' 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.250 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.507 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.507 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.507 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.765 07:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:17:21.331 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.331 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:21.331 07:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.331 07:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.331 07:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.331 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.331 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.331 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.589 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.847 00:17:21.847 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.847 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.847 07:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.106 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.106 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.106 07:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.106 07:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.106 07:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.106 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.106 { 00:17:22.106 "auth": { 00:17:22.106 "dhgroup": "ffdhe2048", 00:17:22.106 "digest": "sha384", 00:17:22.106 "state": "completed" 00:17:22.106 }, 00:17:22.106 "cntlid": 63, 00:17:22.106 "listen_address": { 00:17:22.106 "adrfam": "IPv4", 00:17:22.106 "traddr": "10.0.0.2", 00:17:22.106 "trsvcid": "4420", 00:17:22.106 "trtype": "TCP" 00:17:22.106 }, 00:17:22.106 "peer_address": { 00:17:22.106 "adrfam": "IPv4", 00:17:22.106 "traddr": "10.0.0.1", 00:17:22.106 "trsvcid": "47138", 00:17:22.106 "trtype": "TCP" 00:17:22.106 }, 00:17:22.106 "qid": 0, 00:17:22.106 "state": "enabled", 00:17:22.106 "thread": "nvmf_tgt_poll_group_000" 00:17:22.106 } 00:17:22.106 ]' 00:17:22.106 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.106 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.106 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.364 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.364 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.364 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.364 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.364 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.622 07:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 
43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:17:23.188 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.188 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:23.188 07:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.188 07:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.188 07:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.188 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.188 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.188 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.188 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.447 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.705 00:17:23.705 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.705 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.705 07:03:31 
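Everything in this stretch of the log is one sweep of the nested loops visible in the trace (target/auth.sh's "for dhgroup in ${dhgroups[@]}" and "for keyid in ${!keys[@]}"): for each dhgroup reached so far (null, ffdhe2048, ffdhe3072) every key id 0-3 goes through configure, attach, verify, kernel connect, and then a cleanup so the next round starts from a clean target. The per-round cleanup steps, in the order they appear in the trace:

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # right after the qpair checks
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0                            # after the kernel-initiator connect
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd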
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.963 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.963 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.963 07:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.963 07:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.963 07:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.963 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.963 { 00:17:23.963 "auth": { 00:17:23.963 "dhgroup": "ffdhe3072", 00:17:23.963 "digest": "sha384", 00:17:23.963 "state": "completed" 00:17:23.963 }, 00:17:23.963 "cntlid": 65, 00:17:23.963 "listen_address": { 00:17:23.963 "adrfam": "IPv4", 00:17:23.963 "traddr": "10.0.0.2", 00:17:23.963 "trsvcid": "4420", 00:17:23.963 "trtype": "TCP" 00:17:23.963 }, 00:17:23.963 "peer_address": { 00:17:23.963 "adrfam": "IPv4", 00:17:23.963 "traddr": "10.0.0.1", 00:17:23.963 "trsvcid": "47170", 00:17:23.963 "trtype": "TCP" 00:17:23.963 }, 00:17:23.963 "qid": 0, 00:17:23.963 "state": "enabled", 00:17:23.963 "thread": "nvmf_tgt_poll_group_000" 00:17:23.963 } 00:17:23.963 ]' 00:17:23.963 07:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.963 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.963 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.221 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.221 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.221 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.221 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.221 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.479 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:17:25.044 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.044 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:25.044 07:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.044 07:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.044 07:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.044 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.044 
07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.045 07:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.302 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.561 00:17:25.561 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.561 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.561 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.820 { 00:17:25.820 "auth": { 00:17:25.820 "dhgroup": "ffdhe3072", 00:17:25.820 "digest": "sha384", 00:17:25.820 "state": "completed" 00:17:25.820 }, 00:17:25.820 "cntlid": 67, 00:17:25.820 "listen_address": { 00:17:25.820 "adrfam": "IPv4", 00:17:25.820 "traddr": "10.0.0.2", 00:17:25.820 "trsvcid": "4420", 00:17:25.820 "trtype": "TCP" 00:17:25.820 }, 00:17:25.820 "peer_address": { 00:17:25.820 
"adrfam": "IPv4", 00:17:25.820 "traddr": "10.0.0.1", 00:17:25.820 "trsvcid": "47204", 00:17:25.820 "trtype": "TCP" 00:17:25.820 }, 00:17:25.820 "qid": 0, 00:17:25.820 "state": "enabled", 00:17:25.820 "thread": "nvmf_tgt_poll_group_000" 00:17:25.820 } 00:17:25.820 ]' 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.820 07:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.162 07:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:17:26.728 07:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.728 07:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:26.728 07:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.728 07:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.728 07:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.729 07:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.729 07:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.729 07:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.987 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.552 00:17:27.552 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.552 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.552 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.552 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.552 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.552 07:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.552 07:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.810 { 00:17:27.810 "auth": { 00:17:27.810 "dhgroup": "ffdhe3072", 00:17:27.810 "digest": "sha384", 00:17:27.810 "state": "completed" 00:17:27.810 }, 00:17:27.810 "cntlid": 69, 00:17:27.810 "listen_address": { 00:17:27.810 "adrfam": "IPv4", 00:17:27.810 "traddr": "10.0.0.2", 00:17:27.810 "trsvcid": "4420", 00:17:27.810 "trtype": "TCP" 00:17:27.810 }, 00:17:27.810 "peer_address": { 00:17:27.810 "adrfam": "IPv4", 00:17:27.810 "traddr": "10.0.0.1", 00:17:27.810 "trsvcid": "43070", 00:17:27.810 "trtype": "TCP" 00:17:27.810 }, 00:17:27.810 "qid": 0, 00:17:27.810 "state": "enabled", 00:17:27.810 "thread": "nvmf_tgt_poll_group_000" 00:17:27.810 } 00:17:27.810 ]' 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.810 07:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.068 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:17:28.634 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.634 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:28.634 07:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.634 07:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.634 07:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.634 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.634 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.634 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.893 07:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.153 00:17:29.153 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
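The entries above walk one iteration of connect_authenticate: the host-side bdev_nvme layer is pinned to a single digest/DH-group pair, the target allows the host NQN on the subsystem with a DH-CHAP key (and, where one is defined, a controller key), and a controller is then attached over TCP. Condensed into a standalone sketch, with the socket path, addresses and key names copied from this log (the key objects themselves are registered earlier in the test and are assumed to already exist):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd

# Host side: pin negotiation to one digest / DH-group combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side (default RPC socket): allow this host and bind its DH-CHAP keys.
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller; the attach only succeeds if DH-CHAP completes.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

In the log, rpc_cmd talks to the target's default RPC socket while hostrpc passes -s /var/tmp/host.sock to reach the host-side SPDK application; the sketch keeps that split.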
00:17:29.153 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.153 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.412 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.412 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.412 07:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.412 07:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.412 07:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.412 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.412 { 00:17:29.412 "auth": { 00:17:29.412 "dhgroup": "ffdhe3072", 00:17:29.412 "digest": "sha384", 00:17:29.412 "state": "completed" 00:17:29.412 }, 00:17:29.412 "cntlid": 71, 00:17:29.412 "listen_address": { 00:17:29.412 "adrfam": "IPv4", 00:17:29.412 "traddr": "10.0.0.2", 00:17:29.412 "trsvcid": "4420", 00:17:29.412 "trtype": "TCP" 00:17:29.412 }, 00:17:29.412 "peer_address": { 00:17:29.412 "adrfam": "IPv4", 00:17:29.412 "traddr": "10.0.0.1", 00:17:29.412 "trsvcid": "43104", 00:17:29.412 "trtype": "TCP" 00:17:29.412 }, 00:17:29.412 "qid": 0, 00:17:29.412 "state": "enabled", 00:17:29.412 "thread": "nvmf_tgt_poll_group_000" 00:17:29.412 } 00:17:29.412 ]' 00:17:29.412 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.671 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.671 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.671 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.671 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.671 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.671 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.671 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.929 07:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:17:30.495 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.495 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:30.495 07:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.495 07:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.495 07:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.495 07:03:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.495 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.495 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.495 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.754 07:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.013 00:17:31.013 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.013 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.013 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.271 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.271 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.272 07:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.272 07:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.272 07:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.272 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.272 { 00:17:31.272 "auth": { 00:17:31.272 "dhgroup": "ffdhe4096", 00:17:31.272 "digest": "sha384", 00:17:31.272 "state": "completed" 00:17:31.272 }, 00:17:31.272 "cntlid": 73, 00:17:31.272 
"listen_address": { 00:17:31.272 "adrfam": "IPv4", 00:17:31.272 "traddr": "10.0.0.2", 00:17:31.272 "trsvcid": "4420", 00:17:31.272 "trtype": "TCP" 00:17:31.272 }, 00:17:31.272 "peer_address": { 00:17:31.272 "adrfam": "IPv4", 00:17:31.272 "traddr": "10.0.0.1", 00:17:31.272 "trsvcid": "43130", 00:17:31.272 "trtype": "TCP" 00:17:31.272 }, 00:17:31.272 "qid": 0, 00:17:31.272 "state": "enabled", 00:17:31.272 "thread": "nvmf_tgt_poll_group_000" 00:17:31.272 } 00:17:31.272 ]' 00:17:31.272 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.530 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.530 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.530 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.530 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.530 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.530 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.530 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.788 07:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:17:32.353 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.353 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:32.353 07:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.353 07:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.353 07:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.353 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.353 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.353 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.611 07:03:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.611 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.178 00:17:33.178 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.178 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.178 07:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.178 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.178 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.178 07:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.178 07:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.178 07:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.178 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.178 { 00:17:33.178 "auth": { 00:17:33.178 "dhgroup": "ffdhe4096", 00:17:33.178 "digest": "sha384", 00:17:33.178 "state": "completed" 00:17:33.178 }, 00:17:33.178 "cntlid": 75, 00:17:33.178 "listen_address": { 00:17:33.179 "adrfam": "IPv4", 00:17:33.179 "traddr": "10.0.0.2", 00:17:33.179 "trsvcid": "4420", 00:17:33.179 "trtype": "TCP" 00:17:33.179 }, 00:17:33.179 "peer_address": { 00:17:33.179 "adrfam": "IPv4", 00:17:33.179 "traddr": "10.0.0.1", 00:17:33.179 "trsvcid": "43156", 00:17:33.179 "trtype": "TCP" 00:17:33.179 }, 00:17:33.179 "qid": 0, 00:17:33.179 "state": "enabled", 00:17:33.179 "thread": "nvmf_tgt_poll_group_000" 00:17:33.179 } 00:17:33.179 ]' 00:17:33.179 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.437 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.437 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.437 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.437 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.437 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:17:33.437 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.437 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.695 07:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.631 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.890 00:17:35.148 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.148 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.148 07:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.406 { 00:17:35.406 "auth": { 00:17:35.406 "dhgroup": "ffdhe4096", 00:17:35.406 "digest": "sha384", 00:17:35.406 "state": "completed" 00:17:35.406 }, 00:17:35.406 "cntlid": 77, 00:17:35.406 "listen_address": { 00:17:35.406 "adrfam": "IPv4", 00:17:35.406 "traddr": "10.0.0.2", 00:17:35.406 "trsvcid": "4420", 00:17:35.406 "trtype": "TCP" 00:17:35.406 }, 00:17:35.406 "peer_address": { 00:17:35.406 "adrfam": "IPv4", 00:17:35.406 "traddr": "10.0.0.1", 00:17:35.406 "trsvcid": "43184", 00:17:35.406 "trtype": "TCP" 00:17:35.406 }, 00:17:35.406 "qid": 0, 00:17:35.406 "state": "enabled", 00:17:35.406 "thread": "nvmf_tgt_poll_group_000" 00:17:35.406 } 00:17:35.406 ]' 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.406 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.664 07:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.598 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.855 00:17:37.112 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.112 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.112 07:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.368 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.368 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.368 07:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.368 07:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 07:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.368 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:37.368 { 00:17:37.368 "auth": { 00:17:37.368 "dhgroup": "ffdhe4096", 00:17:37.368 "digest": "sha384", 00:17:37.368 "state": "completed" 00:17:37.368 }, 00:17:37.368 "cntlid": 79, 00:17:37.368 "listen_address": { 00:17:37.368 "adrfam": "IPv4", 00:17:37.368 "traddr": "10.0.0.2", 00:17:37.368 "trsvcid": "4420", 00:17:37.368 "trtype": "TCP" 00:17:37.368 }, 00:17:37.368 "peer_address": { 00:17:37.368 "adrfam": "IPv4", 00:17:37.368 "traddr": "10.0.0.1", 00:17:37.368 "trsvcid": "43214", 00:17:37.368 "trtype": "TCP" 00:17:37.368 }, 00:17:37.368 "qid": 0, 00:17:37.368 "state": "enabled", 00:17:37.368 "thread": "nvmf_tgt_poll_group_000" 00:17:37.368 } 00:17:37.368 ]' 00:17:37.368 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.369 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.369 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.369 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.369 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.369 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.369 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.369 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.624 07:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
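After each attach, the test reads the subsystem's queue pairs back from the target and asserts that authentication actually completed with the negotiated parameters. A minimal version of that check, assuming a single qpair and using the sha384/ffdhe4096 values from the dump just above (the expected strings change with every loop iteration):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# The host-side controller must exist under the expected name.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Read the qpairs back and check the negotiated auth parameters.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach again so the next digest/dhgroup/key combination starts clean.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0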
00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.553 07:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.116 00:17:39.116 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.116 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.116 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.373 { 00:17:39.373 "auth": { 00:17:39.373 "dhgroup": "ffdhe6144", 00:17:39.373 "digest": "sha384", 00:17:39.373 "state": "completed" 00:17:39.373 }, 00:17:39.373 "cntlid": 81, 00:17:39.373 "listen_address": { 00:17:39.373 "adrfam": "IPv4", 00:17:39.373 "traddr": "10.0.0.2", 00:17:39.373 "trsvcid": "4420", 00:17:39.373 "trtype": "TCP" 00:17:39.373 }, 00:17:39.373 "peer_address": { 00:17:39.373 "adrfam": "IPv4", 00:17:39.373 "traddr": "10.0.0.1", 00:17:39.373 "trsvcid": "46282", 00:17:39.373 "trtype": "TCP" 00:17:39.373 }, 00:17:39.373 "qid": 0, 00:17:39.373 "state": "enabled", 00:17:39.373 "thread": "nvmf_tgt_poll_group_000" 00:17:39.373 } 00:17:39.373 ]' 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:17:39.373 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.631 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.631 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.631 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.631 07:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:17:40.565 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.566 07:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.132 00:17:41.132 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.132 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.132 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.392 { 00:17:41.392 "auth": { 00:17:41.392 "dhgroup": "ffdhe6144", 00:17:41.392 "digest": "sha384", 00:17:41.392 "state": "completed" 00:17:41.392 }, 00:17:41.392 "cntlid": 83, 00:17:41.392 "listen_address": { 00:17:41.392 "adrfam": "IPv4", 00:17:41.392 "traddr": "10.0.0.2", 00:17:41.392 "trsvcid": "4420", 00:17:41.392 "trtype": "TCP" 00:17:41.392 }, 00:17:41.392 "peer_address": { 00:17:41.392 "adrfam": "IPv4", 00:17:41.392 "traddr": "10.0.0.1", 00:17:41.392 "trsvcid": "46310", 00:17:41.392 "trtype": "TCP" 00:17:41.392 }, 00:17:41.392 "qid": 0, 00:17:41.392 "state": "enabled", 00:17:41.392 "thread": "nvmf_tgt_poll_group_000" 00:17:41.392 } 00:17:41.392 ]' 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.392 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.957 07:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:17:42.524 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:42.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.524 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:42.524 07:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.524 07:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.524 07:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.524 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.525 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.525 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.805 07:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.063 00:17:43.063 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.063 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.063 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.321 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.321 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.321 07:03:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.321 07:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.321 07:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.321 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.321 { 00:17:43.321 "auth": { 00:17:43.321 "dhgroup": "ffdhe6144", 00:17:43.321 "digest": "sha384", 00:17:43.321 "state": "completed" 00:17:43.321 }, 00:17:43.321 "cntlid": 85, 00:17:43.321 "listen_address": { 00:17:43.321 "adrfam": "IPv4", 00:17:43.321 "traddr": "10.0.0.2", 00:17:43.321 "trsvcid": "4420", 00:17:43.321 "trtype": "TCP" 00:17:43.321 }, 00:17:43.321 "peer_address": { 00:17:43.321 "adrfam": "IPv4", 00:17:43.321 "traddr": "10.0.0.1", 00:17:43.321 "trsvcid": "46346", 00:17:43.321 "trtype": "TCP" 00:17:43.321 }, 00:17:43.321 "qid": 0, 00:17:43.321 "state": "enabled", 00:17:43.321 "thread": "nvmf_tgt_poll_group_000" 00:17:43.321 } 00:17:43.321 ]' 00:17:43.321 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.580 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.580 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.580 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.580 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.580 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.580 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.580 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.838 07:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:17:44.403 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.403 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:44.403 07:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.403 07:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.403 07:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.403 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.403 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.403 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.968 07:03:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.968 07:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.226 00:17:45.226 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.226 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.226 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.483 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.483 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.483 07:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.483 07:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.483 07:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.483 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.483 { 00:17:45.483 "auth": { 00:17:45.483 "dhgroup": "ffdhe6144", 00:17:45.483 "digest": "sha384", 00:17:45.483 "state": "completed" 00:17:45.483 }, 00:17:45.483 "cntlid": 87, 00:17:45.483 "listen_address": { 00:17:45.483 "adrfam": "IPv4", 00:17:45.483 "traddr": "10.0.0.2", 00:17:45.483 "trsvcid": "4420", 00:17:45.483 "trtype": "TCP" 00:17:45.483 }, 00:17:45.483 "peer_address": { 00:17:45.483 "adrfam": "IPv4", 00:17:45.483 "traddr": "10.0.0.1", 00:17:45.483 "trsvcid": "46368", 00:17:45.483 "trtype": "TCP" 00:17:45.483 }, 00:17:45.483 "qid": 0, 00:17:45.483 "state": "enabled", 00:17:45.483 "thread": "nvmf_tgt_poll_group_000" 00:17:45.483 } 00:17:45.483 ]' 00:17:45.484 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.484 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:17:45.484 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.484 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.484 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.742 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.742 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.742 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.999 07:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:17:46.565 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.565 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:46.565 07:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.565 07:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.565 07:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.565 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.565 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.565 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.565 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.823 07:03:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.823 07:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.389 00:17:47.389 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.389 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.389 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.647 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.647 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.647 07:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.647 07:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.647 07:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.647 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.647 { 00:17:47.647 "auth": { 00:17:47.647 "dhgroup": "ffdhe8192", 00:17:47.647 "digest": "sha384", 00:17:47.647 "state": "completed" 00:17:47.647 }, 00:17:47.647 "cntlid": 89, 00:17:47.647 "listen_address": { 00:17:47.647 "adrfam": "IPv4", 00:17:47.647 "traddr": "10.0.0.2", 00:17:47.647 "trsvcid": "4420", 00:17:47.647 "trtype": "TCP" 00:17:47.647 }, 00:17:47.647 "peer_address": { 00:17:47.647 "adrfam": "IPv4", 00:17:47.647 "traddr": "10.0.0.1", 00:17:47.647 "trsvcid": "46400", 00:17:47.647 "trtype": "TCP" 00:17:47.647 }, 00:17:47.647 "qid": 0, 00:17:47.647 "state": "enabled", 00:17:47.647 "thread": "nvmf_tgt_poll_group_000" 00:17:47.647 } 00:17:47.647 ]' 00:17:47.647 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.905 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.905 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.905 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.905 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.905 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.905 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.905 07:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.163 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret 
DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:17:48.729 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.729 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:48.729 07:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.729 07:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.729 07:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.729 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.729 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:48.729 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.988 07:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.554 00:17:49.554 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.555 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.555 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
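Each iteration traced above follows the same connect_authenticate pattern from target/auth.sh: pick one digest/dhgroup combination, allow the host NQN on the subsystem with a key pair, attach a controller from the host side with the matching DH-HMAC-CHAP keys, and confirm the controller shows up. A minimal bash sketch of that per-iteration sequence, reconstructed only from the RPC expansions visible in the log (the hostrpc function body is inferred from those expansions, and hostnqn is just a variable holding the host NQN used throughout the trace):

  # host NQN used throughout this run (taken from the log above)
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd

  # hostrpc drives the host application's RPC socket; body reconstructed from the expansions in the trace
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  # 1. restrict the host to one digest and one DH group for this iteration
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

  # 2. on the target (rpc_cmd in the trace), allow the host NQN with key0/ckey0:
  #    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. attach a controller over TCP using the same key pair
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 4. authentication succeeded if the controller is listed
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0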
00:17:49.813 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.813 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.813 07:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.813 07:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.813 07:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.813 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.813 { 00:17:49.813 "auth": { 00:17:49.813 "dhgroup": "ffdhe8192", 00:17:49.813 "digest": "sha384", 00:17:49.813 "state": "completed" 00:17:49.813 }, 00:17:49.813 "cntlid": 91, 00:17:49.813 "listen_address": { 00:17:49.813 "adrfam": "IPv4", 00:17:49.813 "traddr": "10.0.0.2", 00:17:49.813 "trsvcid": "4420", 00:17:49.813 "trtype": "TCP" 00:17:49.813 }, 00:17:49.813 "peer_address": { 00:17:49.813 "adrfam": "IPv4", 00:17:49.813 "traddr": "10.0.0.1", 00:17:49.813 "trsvcid": "33262", 00:17:49.813 "trtype": "TCP" 00:17:49.813 }, 00:17:49.813 "qid": 0, 00:17:49.813 "state": "enabled", 00:17:49.813 "thread": "nvmf_tgt_poll_group_000" 00:17:49.813 } 00:17:49.813 ]' 00:17:49.813 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.813 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.813 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.072 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.072 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.072 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.072 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.072 07:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.330 07:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:17:50.896 07:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.896 07:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:51.154 07:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.154 07:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.154 07:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.154 07:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.154 07:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:17:51.154 07:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.412 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.979 00:17:51.979 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.979 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.979 07:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.238 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.239 { 00:17:52.239 "auth": { 00:17:52.239 "dhgroup": "ffdhe8192", 00:17:52.239 "digest": "sha384", 00:17:52.239 "state": "completed" 00:17:52.239 }, 00:17:52.239 "cntlid": 93, 00:17:52.239 "listen_address": { 00:17:52.239 "adrfam": "IPv4", 00:17:52.239 "traddr": "10.0.0.2", 00:17:52.239 "trsvcid": "4420", 00:17:52.239 "trtype": "TCP" 00:17:52.239 }, 00:17:52.239 "peer_address": { 00:17:52.239 "adrfam": "IPv4", 00:17:52.239 "traddr": "10.0.0.1", 00:17:52.239 "trsvcid": "33294", 00:17:52.239 
"trtype": "TCP" 00:17:52.239 }, 00:17:52.239 "qid": 0, 00:17:52.239 "state": "enabled", 00:17:52.239 "thread": "nvmf_tgt_poll_group_000" 00:17:52.239 } 00:17:52.239 ]' 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.239 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.497 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.497 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.497 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.756 07:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:17:53.322 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.322 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:53.322 07:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.322 07:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.322 07:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.322 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.322 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.322 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:17:53.581 07:04:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.581 07:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.144 00:17:54.144 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.144 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.144 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.402 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.402 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.402 07:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.402 07:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.402 07:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.402 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.402 { 00:17:54.402 "auth": { 00:17:54.402 "dhgroup": "ffdhe8192", 00:17:54.402 "digest": "sha384", 00:17:54.402 "state": "completed" 00:17:54.402 }, 00:17:54.402 "cntlid": 95, 00:17:54.402 "listen_address": { 00:17:54.402 "adrfam": "IPv4", 00:17:54.402 "traddr": "10.0.0.2", 00:17:54.402 "trsvcid": "4420", 00:17:54.402 "trtype": "TCP" 00:17:54.402 }, 00:17:54.402 "peer_address": { 00:17:54.402 "adrfam": "IPv4", 00:17:54.402 "traddr": "10.0.0.1", 00:17:54.402 "trsvcid": "33330", 00:17:54.402 "trtype": "TCP" 00:17:54.402 }, 00:17:54.402 "qid": 0, 00:17:54.402 "state": "enabled", 00:17:54.402 "thread": "nvmf_tgt_poll_group_000" 00:17:54.402 } 00:17:54.402 ]' 00:17:54.402 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.660 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.660 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.661 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.661 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.661 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.661 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.661 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.919 07:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.853 07:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.431 00:17:56.431 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:17:56.431 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.431 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.718 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.718 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.718 07:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.718 07:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.718 07:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.718 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.718 { 00:17:56.718 "auth": { 00:17:56.718 "dhgroup": "null", 00:17:56.719 "digest": "sha512", 00:17:56.719 "state": "completed" 00:17:56.719 }, 00:17:56.719 "cntlid": 97, 00:17:56.719 "listen_address": { 00:17:56.719 "adrfam": "IPv4", 00:17:56.719 "traddr": "10.0.0.2", 00:17:56.719 "trsvcid": "4420", 00:17:56.719 "trtype": "TCP" 00:17:56.719 }, 00:17:56.719 "peer_address": { 00:17:56.719 "adrfam": "IPv4", 00:17:56.719 "traddr": "10.0.0.1", 00:17:56.719 "trsvcid": "33360", 00:17:56.719 "trtype": "TCP" 00:17:56.719 }, 00:17:56.719 "qid": 0, 00:17:56.719 "state": "enabled", 00:17:56.719 "thread": "nvmf_tgt_poll_group_000" 00:17:56.719 } 00:17:56.719 ]' 00:17:56.719 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.719 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.719 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.719 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:56.719 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.719 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.719 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.719 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.977 07:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:17:57.544 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.544 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:57.544 07:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.544 07:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.544 
07:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.544 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.544 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.544 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.804 07:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.370 00:17:58.370 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.370 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.370 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.629 { 00:17:58.629 "auth": { 00:17:58.629 "dhgroup": "null", 00:17:58.629 "digest": "sha512", 00:17:58.629 "state": "completed" 00:17:58.629 }, 00:17:58.629 "cntlid": 99, 00:17:58.629 "listen_address": { 
00:17:58.629 "adrfam": "IPv4", 00:17:58.629 "traddr": "10.0.0.2", 00:17:58.629 "trsvcid": "4420", 00:17:58.629 "trtype": "TCP" 00:17:58.629 }, 00:17:58.629 "peer_address": { 00:17:58.629 "adrfam": "IPv4", 00:17:58.629 "traddr": "10.0.0.1", 00:17:58.629 "trsvcid": "41340", 00:17:58.629 "trtype": "TCP" 00:17:58.629 }, 00:17:58.629 "qid": 0, 00:17:58.629 "state": "enabled", 00:17:58.629 "thread": "nvmf_tgt_poll_group_000" 00:17:58.629 } 00:17:58.629 ]' 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.629 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.889 07:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:17:59.456 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.457 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:17:59.457 07:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.457 07:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.457 07:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.457 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.457 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.457 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.716 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.975 00:17:59.975 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.975 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.975 07:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.234 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.234 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.234 07:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.234 07:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.234 07:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.234 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.234 { 00:18:00.234 "auth": { 00:18:00.234 "dhgroup": "null", 00:18:00.234 "digest": "sha512", 00:18:00.234 "state": "completed" 00:18:00.234 }, 00:18:00.234 "cntlid": 101, 00:18:00.234 "listen_address": { 00:18:00.234 "adrfam": "IPv4", 00:18:00.234 "traddr": "10.0.0.2", 00:18:00.234 "trsvcid": "4420", 00:18:00.234 "trtype": "TCP" 00:18:00.234 }, 00:18:00.234 "peer_address": { 00:18:00.234 "adrfam": "IPv4", 00:18:00.234 "traddr": "10.0.0.1", 00:18:00.234 "trsvcid": "41362", 00:18:00.234 "trtype": "TCP" 00:18:00.234 }, 00:18:00.234 "qid": 0, 00:18:00.234 "state": "enabled", 00:18:00.234 "thread": "nvmf_tgt_poll_group_000" 00:18:00.234 } 00:18:00.234 ]' 00:18:00.234 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.492 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.492 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.492 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:00.492 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.492 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.492 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:00.492 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.751 07:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:18:01.317 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.317 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:01.317 07:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.317 07:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.317 07:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.317 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.317 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:01.317 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.575 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.832 00:18:01.832 07:04:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.832 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.832 07:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.090 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.090 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.090 07:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.090 07:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.090 07:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.090 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.090 { 00:18:02.090 "auth": { 00:18:02.090 "dhgroup": "null", 00:18:02.090 "digest": "sha512", 00:18:02.090 "state": "completed" 00:18:02.090 }, 00:18:02.090 "cntlid": 103, 00:18:02.090 "listen_address": { 00:18:02.090 "adrfam": "IPv4", 00:18:02.090 "traddr": "10.0.0.2", 00:18:02.090 "trsvcid": "4420", 00:18:02.090 "trtype": "TCP" 00:18:02.090 }, 00:18:02.090 "peer_address": { 00:18:02.090 "adrfam": "IPv4", 00:18:02.090 "traddr": "10.0.0.1", 00:18:02.090 "trsvcid": "41384", 00:18:02.090 "trtype": "TCP" 00:18:02.090 }, 00:18:02.090 "qid": 0, 00:18:02.090 "state": "enabled", 00:18:02.090 "thread": "nvmf_tgt_poll_group_000" 00:18:02.090 } 00:18:02.090 ]' 00:18:02.090 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.090 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.091 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.091 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.091 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.349 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.349 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.349 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.607 07:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:18:03.173 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.173 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:03.173 07:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.173 07:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.173 07:04:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.173 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.173 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.173 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.173 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.431 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.690 00:18:03.948 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.948 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.948 07:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.207 { 00:18:04.207 "auth": { 00:18:04.207 "dhgroup": "ffdhe2048", 00:18:04.207 "digest": "sha512", 00:18:04.207 "state": 
"completed" 00:18:04.207 }, 00:18:04.207 "cntlid": 105, 00:18:04.207 "listen_address": { 00:18:04.207 "adrfam": "IPv4", 00:18:04.207 "traddr": "10.0.0.2", 00:18:04.207 "trsvcid": "4420", 00:18:04.207 "trtype": "TCP" 00:18:04.207 }, 00:18:04.207 "peer_address": { 00:18:04.207 "adrfam": "IPv4", 00:18:04.207 "traddr": "10.0.0.1", 00:18:04.207 "trsvcid": "41414", 00:18:04.207 "trtype": "TCP" 00:18:04.207 }, 00:18:04.207 "qid": 0, 00:18:04.207 "state": "enabled", 00:18:04.207 "thread": "nvmf_tgt_poll_group_000" 00:18:04.207 } 00:18:04.207 ]' 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.207 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.465 07:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:18:05.031 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.031 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:05.031 07:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.031 07:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.031 07:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.031 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.031 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:05.031 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:05.288 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:05.288 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.288 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.288 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:05.288 07:04:13 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # key=key1 00:18:05.288 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.288 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.288 07:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.288 07:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.289 07:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.289 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.289 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.547 00:18:05.547 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.547 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.547 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.805 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.805 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.805 07:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.805 07:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.805 07:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.805 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.805 { 00:18:05.805 "auth": { 00:18:05.805 "dhgroup": "ffdhe2048", 00:18:05.805 "digest": "sha512", 00:18:05.805 "state": "completed" 00:18:05.805 }, 00:18:05.805 "cntlid": 107, 00:18:05.805 "listen_address": { 00:18:05.805 "adrfam": "IPv4", 00:18:05.805 "traddr": "10.0.0.2", 00:18:05.805 "trsvcid": "4420", 00:18:05.805 "trtype": "TCP" 00:18:05.805 }, 00:18:05.805 "peer_address": { 00:18:05.805 "adrfam": "IPv4", 00:18:05.805 "traddr": "10.0.0.1", 00:18:05.805 "trsvcid": "41438", 00:18:05.805 "trtype": "TCP" 00:18:05.805 }, 00:18:05.805 "qid": 0, 00:18:05.805 "state": "enabled", 00:18:05.805 "thread": "nvmf_tgt_poll_group_000" 00:18:05.805 } 00:18:05.805 ]' 00:18:05.805 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.062 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.062 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.062 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.062 07:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.062 07:04:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.062 07:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.062 07:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.319 07:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.252 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.510 00:18:07.768 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.768 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.768 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.027 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.027 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.027 07:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.027 07:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.027 07:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.027 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.027 { 00:18:08.027 "auth": { 00:18:08.027 "dhgroup": "ffdhe2048", 00:18:08.027 "digest": "sha512", 00:18:08.027 "state": "completed" 00:18:08.027 }, 00:18:08.027 "cntlid": 109, 00:18:08.027 "listen_address": { 00:18:08.027 "adrfam": "IPv4", 00:18:08.027 "traddr": "10.0.0.2", 00:18:08.027 "trsvcid": "4420", 00:18:08.027 "trtype": "TCP" 00:18:08.027 }, 00:18:08.027 "peer_address": { 00:18:08.027 "adrfam": "IPv4", 00:18:08.027 "traddr": "10.0.0.1", 00:18:08.027 "trsvcid": "51128", 00:18:08.027 "trtype": "TCP" 00:18:08.027 }, 00:18:08.027 "qid": 0, 00:18:08.027 "state": "enabled", 00:18:08.027 "thread": "nvmf_tgt_poll_group_000" 00:18:08.027 } 00:18:08.027 ]' 00:18:08.027 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.027 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.027 07:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.027 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.027 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.027 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.027 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.027 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.285 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:18:08.852 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.852 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:08.852 07:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.852 07:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.112 07:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.112 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.112 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:09.112 07:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.371 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.659 00:18:09.659 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.659 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.659 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:18:09.918 { 00:18:09.918 "auth": { 00:18:09.918 "dhgroup": "ffdhe2048", 00:18:09.918 "digest": "sha512", 00:18:09.918 "state": "completed" 00:18:09.918 }, 00:18:09.918 "cntlid": 111, 00:18:09.918 "listen_address": { 00:18:09.918 "adrfam": "IPv4", 00:18:09.918 "traddr": "10.0.0.2", 00:18:09.918 "trsvcid": "4420", 00:18:09.918 "trtype": "TCP" 00:18:09.918 }, 00:18:09.918 "peer_address": { 00:18:09.918 "adrfam": "IPv4", 00:18:09.918 "traddr": "10.0.0.1", 00:18:09.918 "trsvcid": "51160", 00:18:09.918 "trtype": "TCP" 00:18:09.918 }, 00:18:09.918 "qid": 0, 00:18:09.918 "state": "enabled", 00:18:09.918 "thread": "nvmf_tgt_poll_group_000" 00:18:09.918 } 00:18:09.918 ]' 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.918 07:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.176 07:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:18:10.743 07:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.744 07:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:10.744 07:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.744 07:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.002 07:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.002 07:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.002 07:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.002 07:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.002 07:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.002 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.568 00:18:11.568 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.568 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.568 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.827 { 00:18:11.827 "auth": { 00:18:11.827 "dhgroup": "ffdhe3072", 00:18:11.827 "digest": "sha512", 00:18:11.827 "state": "completed" 00:18:11.827 }, 00:18:11.827 "cntlid": 113, 00:18:11.827 "listen_address": { 00:18:11.827 "adrfam": "IPv4", 00:18:11.827 "traddr": "10.0.0.2", 00:18:11.827 "trsvcid": "4420", 00:18:11.827 "trtype": "TCP" 00:18:11.827 }, 00:18:11.827 "peer_address": { 00:18:11.827 "adrfam": "IPv4", 00:18:11.827 "traddr": "10.0.0.1", 00:18:11.827 "trsvcid": "51174", 00:18:11.827 "trtype": "TCP" 00:18:11.827 }, 00:18:11.827 "qid": 0, 00:18:11.827 "state": "enabled", 00:18:11.827 "thread": "nvmf_tgt_poll_group_000" 00:18:11.827 } 00:18:11.827 ]' 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.827 07:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.085 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:18:12.652 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.652 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:12.652 07:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.652 07:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.652 07:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.652 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.652 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.652 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.218 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:13.218 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.218 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.218 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:13.218 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.218 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.218 07:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.218 07:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.218 07:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.218 07:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.218 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.218 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.475 00:18:13.475 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.475 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.475 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.733 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.733 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.733 07:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.733 07:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.733 07:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.733 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.733 { 00:18:13.733 "auth": { 00:18:13.733 "dhgroup": "ffdhe3072", 00:18:13.733 "digest": "sha512", 00:18:13.734 "state": "completed" 00:18:13.734 }, 00:18:13.734 "cntlid": 115, 00:18:13.734 "listen_address": { 00:18:13.734 "adrfam": "IPv4", 00:18:13.734 "traddr": "10.0.0.2", 00:18:13.734 "trsvcid": "4420", 00:18:13.734 "trtype": "TCP" 00:18:13.734 }, 00:18:13.734 "peer_address": { 00:18:13.734 "adrfam": "IPv4", 00:18:13.734 "traddr": "10.0.0.1", 00:18:13.734 "trsvcid": "51202", 00:18:13.734 "trtype": "TCP" 00:18:13.734 }, 00:18:13.734 "qid": 0, 00:18:13.734 "state": "enabled", 00:18:13.734 "thread": "nvmf_tgt_poll_group_000" 00:18:13.734 } 00:18:13.734 ]' 00:18:13.734 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.734 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.734 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.734 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.734 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.734 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.734 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.734 07:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.300 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:18:14.867 07:04:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.867 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:14.867 07:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.867 07:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.867 07:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.868 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.868 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:14.868 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.138 07:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.395 00:18:15.395 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.395 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.395 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.652 { 00:18:15.652 "auth": { 00:18:15.652 "dhgroup": "ffdhe3072", 00:18:15.652 "digest": "sha512", 00:18:15.652 "state": "completed" 00:18:15.652 }, 00:18:15.652 "cntlid": 117, 00:18:15.652 "listen_address": { 00:18:15.652 "adrfam": "IPv4", 00:18:15.652 "traddr": "10.0.0.2", 00:18:15.652 "trsvcid": "4420", 00:18:15.652 "trtype": "TCP" 00:18:15.652 }, 00:18:15.652 "peer_address": { 00:18:15.652 "adrfam": "IPv4", 00:18:15.652 "traddr": "10.0.0.1", 00:18:15.652 "trsvcid": "51210", 00:18:15.652 "trtype": "TCP" 00:18:15.652 }, 00:18:15.652 "qid": 0, 00:18:15.652 "state": "enabled", 00:18:15.652 "thread": "nvmf_tgt_poll_group_000" 00:18:15.652 } 00:18:15.652 ]' 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.652 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.910 07:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:18:16.851 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.851 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:16.851 07:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.851 07:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.851 07:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.851 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.851 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:16.851 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.108 07:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.366 00:18:17.366 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.366 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.366 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.625 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.625 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.625 07:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.625 07:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.625 07:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.625 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.625 { 00:18:17.625 "auth": { 00:18:17.625 "dhgroup": "ffdhe3072", 00:18:17.625 "digest": "sha512", 00:18:17.625 "state": "completed" 00:18:17.625 }, 00:18:17.625 "cntlid": 119, 00:18:17.625 "listen_address": { 00:18:17.625 "adrfam": "IPv4", 00:18:17.625 "traddr": "10.0.0.2", 00:18:17.625 "trsvcid": "4420", 00:18:17.625 "trtype": "TCP" 00:18:17.625 }, 00:18:17.625 "peer_address": { 00:18:17.625 "adrfam": "IPv4", 00:18:17.625 "traddr": "10.0.0.1", 00:18:17.625 "trsvcid": "52542", 00:18:17.625 "trtype": "TCP" 00:18:17.625 }, 00:18:17.625 "qid": 0, 00:18:17.625 "state": "enabled", 00:18:17.625 "thread": "nvmf_tgt_poll_group_000" 00:18:17.625 } 00:18:17.625 ]' 00:18:17.625 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.883 
07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.883 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.883 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.883 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.883 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.883 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.883 07:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.141 07:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:18:18.708 07:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.708 07:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:18.708 07:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.708 07:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.708 07:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.708 07:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.708 07:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.708 07:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.708 07:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.277 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.536 00:18:19.536 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.536 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.536 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.795 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.795 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.796 07:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.796 07:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.796 07:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.796 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.796 { 00:18:19.796 "auth": { 00:18:19.796 "dhgroup": "ffdhe4096", 00:18:19.796 "digest": "sha512", 00:18:19.796 "state": "completed" 00:18:19.796 }, 00:18:19.796 "cntlid": 121, 00:18:19.796 "listen_address": { 00:18:19.796 "adrfam": "IPv4", 00:18:19.796 "traddr": "10.0.0.2", 00:18:19.796 "trsvcid": "4420", 00:18:19.796 "trtype": "TCP" 00:18:19.796 }, 00:18:19.796 "peer_address": { 00:18:19.796 "adrfam": "IPv4", 00:18:19.796 "traddr": "10.0.0.1", 00:18:19.796 "trsvcid": "52570", 00:18:19.796 "trtype": "TCP" 00:18:19.796 }, 00:18:19.796 "qid": 0, 00:18:19.796 "state": "enabled", 00:18:19.796 "thread": "nvmf_tgt_poll_group_000" 00:18:19.796 } 00:18:19.796 ]' 00:18:19.796 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.796 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.796 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.796 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.055 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.055 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.055 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.055 07:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.314 07:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret 
DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:18:20.881 07:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.881 07:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:20.882 07:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.882 07:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.882 07:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.882 07:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.882 07:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.882 07:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.141 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.709 00:18:21.709 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.709 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.709 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
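For reference, one iteration of the loop being traced here boils down to four RPCs: restrict the host's DH-HMAC-CHAP digests and DH groups, register the host NQN on the subsystem with a key pair, attach a controller through the host stack with the same pair, and list controllers to confirm nvme0 exists. A minimal sketch of the ffdhe4096/key1 pass above, reusing the socket path, addresses and NQNs from this run (key1/ckey1 are key names assumed to have been registered earlier in the test, and the target-side calls are assumed to go to the target's default RPC socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd

    # host side: limit negotiation to the digest/dhgroup combination under test
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # target side: allow this host NQN with key1 (host key) and ckey1 (controller key)
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach a controller, authenticating with the same key pair
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # the attach only succeeds if authentication completed; nvme0 should be listed
    $rpc -s $hostsock bdev_nvme_get_controllers | jq -r '.[].name'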
00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.968 { 00:18:21.968 "auth": { 00:18:21.968 "dhgroup": "ffdhe4096", 00:18:21.968 "digest": "sha512", 00:18:21.968 "state": "completed" 00:18:21.968 }, 00:18:21.968 "cntlid": 123, 00:18:21.968 "listen_address": { 00:18:21.968 "adrfam": "IPv4", 00:18:21.968 "traddr": "10.0.0.2", 00:18:21.968 "trsvcid": "4420", 00:18:21.968 "trtype": "TCP" 00:18:21.968 }, 00:18:21.968 "peer_address": { 00:18:21.968 "adrfam": "IPv4", 00:18:21.968 "traddr": "10.0.0.1", 00:18:21.968 "trsvcid": "52590", 00:18:21.968 "trtype": "TCP" 00:18:21.968 }, 00:18:21.968 "qid": 0, 00:18:21.968 "state": "enabled", 00:18:21.968 "thread": "nvmf_tgt_poll_group_000" 00:18:21.968 } 00:18:21.968 ]' 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.968 07:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.968 07:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.968 07:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.968 07:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.537 07:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:18:23.104 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.104 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:23.104 07:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.104 07:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.104 07:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.104 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.104 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:18:23.104 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.363 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.620 00:18:23.620 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.620 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.620 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.186 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.186 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.186 07:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.186 07:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.186 07:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.186 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.186 { 00:18:24.186 "auth": { 00:18:24.186 "dhgroup": "ffdhe4096", 00:18:24.186 "digest": "sha512", 00:18:24.186 "state": "completed" 00:18:24.186 }, 00:18:24.186 "cntlid": 125, 00:18:24.186 "listen_address": { 00:18:24.186 "adrfam": "IPv4", 00:18:24.186 "traddr": "10.0.0.2", 00:18:24.186 "trsvcid": "4420", 00:18:24.186 "trtype": "TCP" 00:18:24.186 }, 00:18:24.186 "peer_address": { 00:18:24.186 "adrfam": "IPv4", 00:18:24.186 "traddr": "10.0.0.1", 00:18:24.186 "trsvcid": "52616", 00:18:24.186 
"trtype": "TCP" 00:18:24.186 }, 00:18:24.186 "qid": 0, 00:18:24.186 "state": "enabled", 00:18:24.186 "thread": "nvmf_tgt_poll_group_000" 00:18:24.186 } 00:18:24.186 ]' 00:18:24.186 07:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.186 07:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.186 07:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.186 07:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:24.186 07:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.186 07:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.186 07:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.186 07:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.444 07:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:18:25.377 07:04:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.377 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.943 00:18:25.943 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.943 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.943 07:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.202 { 00:18:26.202 "auth": { 00:18:26.202 "dhgroup": "ffdhe4096", 00:18:26.202 "digest": "sha512", 00:18:26.202 "state": "completed" 00:18:26.202 }, 00:18:26.202 "cntlid": 127, 00:18:26.202 "listen_address": { 00:18:26.202 "adrfam": "IPv4", 00:18:26.202 "traddr": "10.0.0.2", 00:18:26.202 "trsvcid": "4420", 00:18:26.202 "trtype": "TCP" 00:18:26.202 }, 00:18:26.202 "peer_address": { 00:18:26.202 "adrfam": "IPv4", 00:18:26.202 "traddr": "10.0.0.1", 00:18:26.202 "trsvcid": "52648", 00:18:26.202 "trtype": "TCP" 00:18:26.202 }, 00:18:26.202 "qid": 0, 00:18:26.202 "state": "enabled", 00:18:26.202 "thread": "nvmf_tgt_poll_group_000" 00:18:26.202 } 00:18:26.202 ]' 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.202 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.769 07:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:18:27.337 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.337 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:27.337 07:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.337 07:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.337 07:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.337 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.337 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.337 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.337 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.596 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.855 00:18:27.855 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.855 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.855 07:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.114 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.114 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.114 07:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.114 07:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.114 07:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.114 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.114 { 00:18:28.114 "auth": { 00:18:28.114 "dhgroup": "ffdhe6144", 00:18:28.114 "digest": "sha512", 00:18:28.114 "state": "completed" 00:18:28.114 }, 00:18:28.114 "cntlid": 129, 00:18:28.114 "listen_address": { 00:18:28.114 "adrfam": "IPv4", 00:18:28.114 "traddr": "10.0.0.2", 00:18:28.114 "trsvcid": "4420", 00:18:28.114 "trtype": "TCP" 00:18:28.114 }, 00:18:28.114 "peer_address": { 00:18:28.114 "adrfam": "IPv4", 00:18:28.114 "traddr": "10.0.0.1", 00:18:28.114 "trsvcid": "44924", 00:18:28.114 "trtype": "TCP" 00:18:28.114 }, 00:18:28.114 "qid": 0, 00:18:28.114 "state": "enabled", 00:18:28.114 "thread": "nvmf_tgt_poll_group_000" 00:18:28.114 } 00:18:28.114 ]' 00:18:28.114 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.114 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.372 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.372 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.372 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.372 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.372 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.372 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.631 07:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:18:29.198 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.198 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:29.198 07:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.198 07:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.198 07:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
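The assertion each pass actually hinges on is the qpair check at target/auth.sh@44-48: after the controller attaches, the subsystem's qpair list must report the negotiated digest and DH group and an auth state of "completed". A sketch of that verification for the ffdhe6144 pass above, using the same RPC and jq filters as the trace (rpc.py path and NQN taken from this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # query the subsystem's qpairs on the target and check the negotiated auth parameters
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # tear the host-side controller down before the next digest/dhgroup/key combination
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0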
00:18:29.198 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.198 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:29.198 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.457 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.458 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.025 00:18:30.025 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.025 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.025 07:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.025 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.025 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.025 07:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.025 07:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.025 07:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.025 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.025 { 00:18:30.025 "auth": { 00:18:30.025 "dhgroup": "ffdhe6144", 00:18:30.025 "digest": "sha512", 00:18:30.025 "state": "completed" 00:18:30.025 }, 00:18:30.025 "cntlid": 131, 00:18:30.025 "listen_address": { 00:18:30.025 "adrfam": "IPv4", 00:18:30.025 "traddr": "10.0.0.2", 
00:18:30.025 "trsvcid": "4420", 00:18:30.025 "trtype": "TCP" 00:18:30.025 }, 00:18:30.025 "peer_address": { 00:18:30.025 "adrfam": "IPv4", 00:18:30.025 "traddr": "10.0.0.1", 00:18:30.025 "trsvcid": "44952", 00:18:30.025 "trtype": "TCP" 00:18:30.025 }, 00:18:30.025 "qid": 0, 00:18:30.025 "state": "enabled", 00:18:30.025 "thread": "nvmf_tgt_poll_group_000" 00:18:30.025 } 00:18:30.025 ]' 00:18:30.025 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.284 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.284 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.284 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.284 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.284 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.284 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.284 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.543 07:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:18:31.111 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.111 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:31.111 07:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.111 07:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.111 07:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.111 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.111 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:31.111 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.371 07:04:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.371 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.938 00:18:31.938 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.938 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.938 07:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.196 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.196 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.196 07:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.196 07:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.196 07:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.196 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.196 { 00:18:32.196 "auth": { 00:18:32.196 "dhgroup": "ffdhe6144", 00:18:32.196 "digest": "sha512", 00:18:32.196 "state": "completed" 00:18:32.196 }, 00:18:32.196 "cntlid": 133, 00:18:32.196 "listen_address": { 00:18:32.196 "adrfam": "IPv4", 00:18:32.196 "traddr": "10.0.0.2", 00:18:32.196 "trsvcid": "4420", 00:18:32.196 "trtype": "TCP" 00:18:32.196 }, 00:18:32.196 "peer_address": { 00:18:32.196 "adrfam": "IPv4", 00:18:32.196 "traddr": "10.0.0.1", 00:18:32.196 "trsvcid": "44972", 00:18:32.196 "trtype": "TCP" 00:18:32.196 }, 00:18:32.196 "qid": 0, 00:18:32.196 "state": "enabled", 00:18:32.196 "thread": "nvmf_tgt_poll_group_000" 00:18:32.196 } 00:18:32.196 ]' 00:18:32.196 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.196 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.196 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.455 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.455 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.455 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.455 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:32.455 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.714 07:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:18:33.282 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.282 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:33.282 07:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.282 07:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.282 07:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.282 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.282 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:33.282 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.849 07:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.108 
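Note that the key3 iteration above passes only --dhchap-key, with no --dhchap-ctrlr-key, so only the host authenticates to the target; the key1/key2 iterations request mutual authentication. A sketch of the difference, reusing the NQNs from the log and assuming $hostnqn is set as in the previous sketch:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Unidirectional: no controller key, the target is not asked to authenticate back.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

# Bidirectional (as used for key1/key2): add --dhchap-ctrlr-key ckeyN to both RPCs.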
00:18:34.108 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.108 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.108 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.366 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.366 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.366 07:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.366 07:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.624 { 00:18:34.624 "auth": { 00:18:34.624 "dhgroup": "ffdhe6144", 00:18:34.624 "digest": "sha512", 00:18:34.624 "state": "completed" 00:18:34.624 }, 00:18:34.624 "cntlid": 135, 00:18:34.624 "listen_address": { 00:18:34.624 "adrfam": "IPv4", 00:18:34.624 "traddr": "10.0.0.2", 00:18:34.624 "trsvcid": "4420", 00:18:34.624 "trtype": "TCP" 00:18:34.624 }, 00:18:34.624 "peer_address": { 00:18:34.624 "adrfam": "IPv4", 00:18:34.624 "traddr": "10.0.0.1", 00:18:34.624 "trsvcid": "45002", 00:18:34.624 "trtype": "TCP" 00:18:34.624 }, 00:18:34.624 "qid": 0, 00:18:34.624 "state": "enabled", 00:18:34.624 "thread": "nvmf_tgt_poll_group_000" 00:18:34.624 } 00:18:34.624 ]' 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.624 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.883 07:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:18:35.450 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.450 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:35.450 07:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.450 07:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.450 07:04:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.450 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.450 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.450 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.450 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.709 07:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.276 00:18:36.276 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.277 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.277 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.536 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.536 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.536 07:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.536 07:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.536 07:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.536 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.536 { 00:18:36.536 "auth": { 00:18:36.536 "dhgroup": "ffdhe8192", 00:18:36.536 "digest": "sha512", 
00:18:36.536 "state": "completed" 00:18:36.536 }, 00:18:36.536 "cntlid": 137, 00:18:36.536 "listen_address": { 00:18:36.536 "adrfam": "IPv4", 00:18:36.536 "traddr": "10.0.0.2", 00:18:36.536 "trsvcid": "4420", 00:18:36.536 "trtype": "TCP" 00:18:36.536 }, 00:18:36.536 "peer_address": { 00:18:36.536 "adrfam": "IPv4", 00:18:36.536 "traddr": "10.0.0.1", 00:18:36.536 "trsvcid": "45036", 00:18:36.536 "trtype": "TCP" 00:18:36.536 }, 00:18:36.536 "qid": 0, 00:18:36.536 "state": "enabled", 00:18:36.536 "thread": "nvmf_tgt_poll_group_000" 00:18:36.536 } 00:18:36.536 ]' 00:18:36.536 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.795 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.795 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.795 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.795 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.795 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.795 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.795 07:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.054 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:18:37.622 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.881 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:37.881 07:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.881 07:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.881 07:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.881 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.881 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:37.881 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:38.139 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:38.139 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.139 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.139 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:38.139 07:04:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.139 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.139 07:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.139 07:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.139 07:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.139 07:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.139 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.139 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.706 00:18:38.706 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.706 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.706 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.964 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.964 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.964 07:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.964 07:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.964 07:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.964 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.964 { 00:18:38.964 "auth": { 00:18:38.964 "dhgroup": "ffdhe8192", 00:18:38.964 "digest": "sha512", 00:18:38.964 "state": "completed" 00:18:38.964 }, 00:18:38.964 "cntlid": 139, 00:18:38.964 "listen_address": { 00:18:38.964 "adrfam": "IPv4", 00:18:38.964 "traddr": "10.0.0.2", 00:18:38.964 "trsvcid": "4420", 00:18:38.964 "trtype": "TCP" 00:18:38.964 }, 00:18:38.964 "peer_address": { 00:18:38.964 "adrfam": "IPv4", 00:18:38.964 "traddr": "10.0.0.1", 00:18:38.964 "trsvcid": "53020", 00:18:38.964 "trtype": "TCP" 00:18:38.964 }, 00:18:38.964 "qid": 0, 00:18:38.964 "state": "enabled", 00:18:38.964 "thread": "nvmf_tgt_poll_group_000" 00:18:38.964 } 00:18:38.964 ]' 00:18:38.964 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.964 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.964 07:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.964 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.964 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
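The [[ ... ]] checks in the trace are the verification half of each iteration: the host must report an nvme0 controller, and the target's qpair must show the negotiated digest and DH group with auth state "completed". A condensed sketch of those checks for the sha512/ffdhe8192 case, using the same jq filters as auth.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Host side: exactly one controller named nvme0 should exist after the attach.
name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target side: the qpair must report the negotiated auth parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]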
00:18:39.222 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.222 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.222 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.480 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:01:NjVjNjU1MzQ5ODA1YWU0ODI5M2NkMjQxOGQ2NzJkOGNvTY1Y: --dhchap-ctrl-secret DHHC-1:02:M2JhNGYzOTgxYzU3ZTViM2M2MjdiMGZmNjk3OTAyYTdiOWE4ODcwNDJlZjE3NmE3yO2itQ==: 00:18:40.048 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.048 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:40.048 07:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.048 07:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.048 07:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.048 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.048 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:40.048 07:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.307 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.875 00:18:40.875 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.875 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.875 07:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.134 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.134 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.134 07:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.134 07:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.134 07:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.134 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.134 { 00:18:41.134 "auth": { 00:18:41.134 "dhgroup": "ffdhe8192", 00:18:41.134 "digest": "sha512", 00:18:41.134 "state": "completed" 00:18:41.134 }, 00:18:41.134 "cntlid": 141, 00:18:41.134 "listen_address": { 00:18:41.134 "adrfam": "IPv4", 00:18:41.134 "traddr": "10.0.0.2", 00:18:41.134 "trsvcid": "4420", 00:18:41.134 "trtype": "TCP" 00:18:41.134 }, 00:18:41.134 "peer_address": { 00:18:41.134 "adrfam": "IPv4", 00:18:41.134 "traddr": "10.0.0.1", 00:18:41.134 "trsvcid": "53056", 00:18:41.134 "trtype": "TCP" 00:18:41.134 }, 00:18:41.134 "qid": 0, 00:18:41.134 "state": "enabled", 00:18:41.134 "thread": "nvmf_tgt_poll_group_000" 00:18:41.134 } 00:18:41.134 ]' 00:18:41.134 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.134 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.134 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.393 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:41.393 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.393 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.393 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.393 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.652 07:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:02:YzVjZGMyZWU3YTA5YWExMDgzZGMzM2VkMzFkYjU1MDQwYTY3MjViZjU4NmVhYjg4HQEYHA==: --dhchap-ctrl-secret DHHC-1:01:ZTk3OTg4ZGJjMTVmMThiYWVjNmM5ODc1NzU4YmQ5ZjJkfl9I: 00:18:42.220 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.220 07:04:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:42.220 07:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.220 07:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.220 07:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.220 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.220 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:42.220 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:42.479 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:42.479 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.479 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.479 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:42.479 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.479 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.479 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:18:42.479 07:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.479 07:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.740 07:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.740 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.740 07:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.344 00:18:43.344 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.344 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.344 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
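Between the RPC-driven iterations the log also exercises the kernel initiator: nvme-cli connects with plaintext DHHC-1 secrets (rather than the named keys used on the RPC path) and then disconnects before the host is removed from the subsystem. A sketch of that leg; the two DHHC-1 strings below are placeholders, not the values from this run:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd
HOSTID=43021b44-defc-4eee-995c-65b6e79138bd

# Connect through the kernel NVMe/TCP initiator using DH-HMAC-CHAP secrets.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret      'DHHC-1:03:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'

# Tear the kernel controller down before the host is removed from the subsystem.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0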
00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.626 { 00:18:43.626 "auth": { 00:18:43.626 "dhgroup": "ffdhe8192", 00:18:43.626 "digest": "sha512", 00:18:43.626 "state": "completed" 00:18:43.626 }, 00:18:43.626 "cntlid": 143, 00:18:43.626 "listen_address": { 00:18:43.626 "adrfam": "IPv4", 00:18:43.626 "traddr": "10.0.0.2", 00:18:43.626 "trsvcid": "4420", 00:18:43.626 "trtype": "TCP" 00:18:43.626 }, 00:18:43.626 "peer_address": { 00:18:43.626 "adrfam": "IPv4", 00:18:43.626 "traddr": "10.0.0.1", 00:18:43.626 "trsvcid": "53078", 00:18:43.626 "trtype": "TCP" 00:18:43.626 }, 00:18:43.626 "qid": 0, 00:18:43.626 "state": "enabled", 00:18:43.626 "thread": "nvmf_tgt_poll_group_000" 00:18:43.626 } 00:18:43.626 ]' 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.626 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.884 07:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.821 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.822 07:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.388 00:18:45.388 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.388 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.388 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.955 { 00:18:45.955 "auth": { 00:18:45.955 "dhgroup": "ffdhe8192", 00:18:45.955 "digest": "sha512", 00:18:45.955 "state": "completed" 00:18:45.955 }, 00:18:45.955 "cntlid": 145, 00:18:45.955 "listen_address": { 00:18:45.955 "adrfam": "IPv4", 00:18:45.955 "traddr": "10.0.0.2", 00:18:45.955 "trsvcid": "4420", 00:18:45.955 "trtype": "TCP" 00:18:45.955 }, 00:18:45.955 "peer_address": { 00:18:45.955 "adrfam": "IPv4", 00:18:45.955 "traddr": "10.0.0.1", 00:18:45.955 "trsvcid": "53118", 00:18:45.955 "trtype": "TCP" 00:18:45.955 }, 00:18:45.955 "qid": 0, 00:18:45.955 "state": "enabled", 00:18:45.955 "thread": "nvmf_tgt_poll_group_000" 00:18:45.955 } 
00:18:45.955 ]' 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.955 07:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.215 07:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:00:NGE3OTAwNTc4ZjQ3Zjk3YzBmNzdiYzc3YWU2ZDI1M2NhMjUxYmQ5NjVjYmI1ZjlhK1CfTQ==: --dhchap-ctrl-secret DHHC-1:03:NmJkMDFlM2JjNmI3ZjFmZWIzNmU5MTUwYWFjOWQ5OTZlM2U0NGJkMGMxOTQ1OTRkYTc4NjE2ZGY5ODk1NjEwY2Mm2FU=: 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:46.782 07:04:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:46.782 07:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:47.350 2024/07/13 07:04:55 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:47.350 request: 00:18:47.350 { 00:18:47.350 "method": "bdev_nvme_attach_controller", 00:18:47.350 "params": { 00:18:47.350 "name": "nvme0", 00:18:47.350 "trtype": "tcp", 00:18:47.350 "traddr": "10.0.0.2", 00:18:47.350 "adrfam": "ipv4", 00:18:47.350 "trsvcid": "4420", 00:18:47.350 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:47.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd", 00:18:47.350 "prchk_reftag": false, 00:18:47.350 "prchk_guard": false, 00:18:47.350 "hdgst": false, 00:18:47.350 "ddgst": false, 00:18:47.350 "dhchap_key": "key2" 00:18:47.350 } 00:18:47.350 } 00:18:47.350 Got JSON-RPC error response 00:18:47.350 GoRPCClient: error on JSON-RPC call 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:47.608 07:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:48.175 2024/07/13 07:04:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:48.175 request: 00:18:48.175 { 00:18:48.175 "method": "bdev_nvme_attach_controller", 00:18:48.175 "params": { 00:18:48.175 "name": "nvme0", 00:18:48.175 "trtype": "tcp", 00:18:48.175 "traddr": "10.0.0.2", 00:18:48.175 "adrfam": "ipv4", 00:18:48.175 "trsvcid": "4420", 00:18:48.175 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:48.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd", 00:18:48.175 "prchk_reftag": false, 00:18:48.175 "prchk_guard": false, 00:18:48.175 "hdgst": false, 00:18:48.175 "ddgst": false, 00:18:48.175 "dhchap_key": "key1", 00:18:48.175 "dhchap_ctrlr_key": "ckey2" 00:18:48.175 } 00:18:48.175 } 00:18:48.175 Got JSON-RPC error response 00:18:48.175 GoRPCClient: error on JSON-RPC call 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key1 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.175 07:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.742 2024/07/13 07:04:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:48.742 request: 00:18:48.742 { 00:18:48.742 "method": "bdev_nvme_attach_controller", 00:18:48.742 "params": { 00:18:48.742 "name": "nvme0", 00:18:48.742 "trtype": "tcp", 00:18:48.742 "traddr": "10.0.0.2", 00:18:48.742 "adrfam": "ipv4", 00:18:48.742 "trsvcid": "4420", 00:18:48.742 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:18:48.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd", 00:18:48.742 "prchk_reftag": false, 00:18:48.742 "prchk_guard": false, 00:18:48.742 "hdgst": false, 00:18:48.742 "ddgst": false, 00:18:48.742 "dhchap_key": "key1", 00:18:48.742 "dhchap_ctrlr_key": "ckey1" 00:18:48.742 } 00:18:48.742 } 00:18:48.742 Got JSON-RPC error response 00:18:48.742 GoRPCClient: error on JSON-RPC call 00:18:48.742 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:48.742 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 93893 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 93893 ']' 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 93893 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93893 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:48.743 killing process with pid 93893 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93893' 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 93893 00:18:48.743 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 93893 00:18:49.001 07:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:49.001 07:04:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.001 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.001 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.001 07:04:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=98682 00:18:49.002 07:04:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:49.002 07:04:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 98682 00:18:49.002 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 98682 ']' 00:18:49.002 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.002 07:04:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.002 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.002 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.002 07:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 98682 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 98682 ']' 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
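For reference, the target restart traced just above reduces to the following commands (the binary path, flags and namespace are taken verbatim from the log; backgrounding with & and the explicit framework_start_init call are assumptions, since --wait-for-rpc leaves the app paused and the bare rpc_cmd in the log does not show which RPCs it sends):

    # Relaunch the nvmf target inside its namespace with the nvmf_auth debug log component enabled.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # After /var/tmp/spdk.sock appears, finish initialization so regular RPCs are accepted (assumed step).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init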
00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.938 07:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.196 07:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.196 07:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:50.196 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:50.196 07:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.196 07:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.455 07:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.023 00:18:51.023 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.023 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.023 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.283 { 00:18:51.283 "auth": { 00:18:51.283 "dhgroup": 
"ffdhe8192", 00:18:51.283 "digest": "sha512", 00:18:51.283 "state": "completed" 00:18:51.283 }, 00:18:51.283 "cntlid": 1, 00:18:51.283 "listen_address": { 00:18:51.283 "adrfam": "IPv4", 00:18:51.283 "traddr": "10.0.0.2", 00:18:51.283 "trsvcid": "4420", 00:18:51.283 "trtype": "TCP" 00:18:51.283 }, 00:18:51.283 "peer_address": { 00:18:51.283 "adrfam": "IPv4", 00:18:51.283 "traddr": "10.0.0.1", 00:18:51.283 "trsvcid": "57590", 00:18:51.283 "trtype": "TCP" 00:18:51.283 }, 00:18:51.283 "qid": 0, 00:18:51.283 "state": "enabled", 00:18:51.283 "thread": "nvmf_tgt_poll_group_000" 00:18:51.283 } 00:18:51.283 ]' 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.283 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.541 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.541 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.541 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.799 07:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid 43021b44-defc-4eee-995c-65b6e79138bd --dhchap-secret DHHC-1:03:OTU2Yjg5ZWNmMTVjN2E0YjU4MTJiNGVhOGMwZmM5OWMxMDQ4M2RlNmQwZmIwYTk1ZGI2ODUxNDc5NGZkYWFjOahaUFs=: 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --dhchap-key key3 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:52.366 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:52.624 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.624 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:52.624 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.624 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:52.624 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.624 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:52.624 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:52.624 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.624 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.883 2024/07/13 07:05:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:52.883 request: 00:18:52.883 { 00:18:52.883 "method": "bdev_nvme_attach_controller", 00:18:52.883 "params": { 00:18:52.883 "name": "nvme0", 00:18:52.883 "trtype": "tcp", 00:18:52.883 "traddr": "10.0.0.2", 00:18:52.883 "adrfam": "ipv4", 00:18:52.883 "trsvcid": "4420", 00:18:52.883 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd", 00:18:52.883 "prchk_reftag": false, 00:18:52.883 "prchk_guard": false, 00:18:52.883 "hdgst": false, 00:18:52.883 "ddgst": false, 00:18:52.883 "dhchap_key": "key3" 00:18:52.883 } 00:18:52.883 } 00:18:52.883 Got JSON-RPC error response 00:18:52.883 GoRPCClient: error on JSON-RPC call 00:18:52.883 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:52.883 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:52.883 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:52.883 07:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:52.883 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:52.883 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:52.883 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
00:18:52.883 07:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.450 2024/07/13 07:05:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:53.450 request: 00:18:53.450 { 00:18:53.450 "method": "bdev_nvme_attach_controller", 00:18:53.450 "params": { 00:18:53.450 "name": "nvme0", 00:18:53.450 "trtype": "tcp", 00:18:53.450 "traddr": "10.0.0.2", 00:18:53.450 "adrfam": "ipv4", 00:18:53.450 "trsvcid": "4420", 00:18:53.450 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:53.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd", 00:18:53.450 "prchk_reftag": false, 00:18:53.450 "prchk_guard": false, 00:18:53.450 "hdgst": false, 00:18:53.450 "ddgst": false, 00:18:53.450 "dhchap_key": "key3" 00:18:53.450 } 00:18:53.450 } 00:18:53.450 Got JSON-RPC error response 00:18:53.450 GoRPCClient: error on JSON-RPC call 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
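The two failures above come from deliberately restricting the host-side DH-HMAC-CHAP parameters before attaching with key3; condensed, the sequence is as follows (RPC socket, addresses and NQNs are copied from the log, with the long host NQN abbreviated as $hostnqn; the log then restores the full digest/dhgroup lists before continuing):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd
    # Limit the host to the sha256 digest only; the attach with key3 is expected to fail
    # (the log reports Code=-5, Input/output error).
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    # Re-enable all digests but allow only the ffdhe2048 DH group; this attach fails as well.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 \
        --dhchap-digests sha256,sha384,sha512
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3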
00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:53.450 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:53.709 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:53.709 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.709 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.709 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.709 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:53.709 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.709 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:53.968 07:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:53.968 2024/07/13 07:05:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:53.968 request: 00:18:53.968 { 00:18:53.968 "method": "bdev_nvme_attach_controller", 00:18:53.968 "params": { 00:18:53.968 "name": "nvme0", 00:18:53.968 "trtype": "tcp", 00:18:53.968 "traddr": "10.0.0.2", 00:18:53.968 "adrfam": "ipv4", 00:18:53.968 "trsvcid": "4420", 00:18:53.968 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:53.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd", 00:18:53.968 "prchk_reftag": false, 00:18:53.968 "prchk_guard": false, 00:18:53.968 "hdgst": false, 00:18:53.968 "ddgst": false, 00:18:53.968 "dhchap_key": "key0", 00:18:53.968 "dhchap_ctrlr_key": "key1" 00:18:53.968 } 00:18:53.968 } 00:18:53.968 Got JSON-RPC error response 00:18:53.968 GoRPCClient: error on JSON-RPC call 00:18:53.968 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:53.968 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:53.968 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:53.968 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:53.968 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:53.968 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:54.226 00:18:54.485 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:54.485 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:54.485 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 93937 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 93937 ']' 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 93937 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93937 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:54.743 killing process with pid 93937 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93937' 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 93937 00:18:54.743 07:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 93937 00:18:55.308 07:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:55.308 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.308 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:55.565 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:55.565 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:55.565 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.565 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:55.565 rmmod nvme_tcp 00:18:55.565 rmmod nvme_fabrics 00:18:55.565 rmmod nvme_keyring 00:18:55.565 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 98682 ']' 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 98682 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 98682 ']' 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 98682 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98682 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98682' 00:18:55.566 killing process with pid 98682 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 98682 00:18:55.566 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 98682 00:18:55.823 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.823 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.FpN /tmp/spdk.key-sha256.m8z /tmp/spdk.key-sha384.D36 /tmp/spdk.key-sha512.4KS /tmp/spdk.key-sha512.F0b /tmp/spdk.key-sha384.vRO /tmp/spdk.key-sha256.NQk '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:18:55.824 00:18:55.824 real 2m44.655s 00:18:55.824 user 6m38.145s 00:18:55.824 sys 0m22.323s 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:55.824 ************************************ 00:18:55.824 07:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.824 END TEST nvmf_auth_target 00:18:55.824 ************************************ 00:18:55.824 07:05:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:55.824 07:05:03 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:55.824 07:05:03 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:55.824 07:05:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:55.824 07:05:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:55.824 07:05:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:55.824 ************************************ 00:18:55.824 START TEST nvmf_bdevio_no_huge 00:18:55.824 ************************************ 00:18:55.824 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:56.082 * Looking for test storage... 
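The next test launched above can also be invoked directly with the same arguments (a built SPDK tree and the usual test environment are assumed):

    /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages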
00:18:56.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.082 07:05:03 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:56.082 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:56.083 07:05:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:56.083 Cannot find device "nvmf_tgt_br" 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:56.083 Cannot find device "nvmf_tgt_br2" 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:56.083 Cannot find device "nvmf_tgt_br" 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:56.083 Cannot find device "nvmf_tgt_br2" 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:56.083 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:56.083 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:56.083 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:56.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:56.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:18:56.389 00:18:56.389 --- 10.0.0.2 ping statistics --- 00:18:56.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.389 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:56.389 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:56.389 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:18:56.389 00:18:56.389 --- 10.0.0.3 ping statistics --- 00:18:56.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.389 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:56.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:56.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:18:56.389 00:18:56.389 --- 10.0.0.1 ping statistics --- 00:18:56.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:56.389 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:56.389 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=99084 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 99084 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 99084 ']' 00:18:56.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
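The pings above verify the veth/bridge topology built by the preceding ip and iptables entries; condensed, with all interface names and addresses exactly as in the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pairs
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # the target address is now reachable from the root (initiator) namespace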
00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:56.390 07:05:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.390 [2024-07-13 07:05:04.404785] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:56.390 [2024-07-13 07:05:04.404899] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:56.657 [2024-07-13 07:05:04.541845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.657 [2024-07-13 07:05:04.643335] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.657 [2024-07-13 07:05:04.643401] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.657 [2024-07-13 07:05:04.643428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.657 [2024-07-13 07:05:04.643436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.657 [2024-07-13 07:05:04.643442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.657 [2024-07-13 07:05:04.644324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:56.657 [2024-07-13 07:05:04.644457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:56.657 [2024-07-13 07:05:04.644580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:56.657 [2024-07-13 07:05:04.644841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.591 [2024-07-13 07:05:05.480263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.591 Malloc0 
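A minimal sketch of the no-hugepages target start traced above (flags are verbatim from the log; backgrounding with & is an assumption):

    # -m 0x78 pins the reactors to cores 3-6; --no-huge -s 1024 runs on 1024 MB of ordinary memory.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &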
00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.591 [2024-07-13 07:05:05.528540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:57.591 { 00:18:57.591 "params": { 00:18:57.591 "name": "Nvme$subsystem", 00:18:57.591 "trtype": "$TEST_TRANSPORT", 00:18:57.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.591 "adrfam": "ipv4", 00:18:57.591 "trsvcid": "$NVMF_PORT", 00:18:57.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.591 "hdgst": ${hdgst:-false}, 00:18:57.591 "ddgst": ${ddgst:-false} 00:18:57.591 }, 00:18:57.591 "method": "bdev_nvme_attach_controller" 00:18:57.591 } 00:18:57.591 EOF 00:18:57.591 )") 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
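Taken together, the provisioning RPCs and the bdevio launch traced above amount to the following (RPC names, sizes and NQNs are verbatim from the log; feeding the generated configuration from a file instead of /dev/fd/62 is a simplification):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then attaches over that listener using the JSON printed a few entries below
    # (a single bdev_nvme_attach_controller entry named Nvme1) and runs its block-device test suite.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio.json --no-huge -s 1024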
00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:57.591 07:05:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:57.591 "params": { 00:18:57.591 "name": "Nvme1", 00:18:57.591 "trtype": "tcp", 00:18:57.591 "traddr": "10.0.0.2", 00:18:57.591 "adrfam": "ipv4", 00:18:57.591 "trsvcid": "4420", 00:18:57.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:57.591 "hdgst": false, 00:18:57.591 "ddgst": false 00:18:57.591 }, 00:18:57.591 "method": "bdev_nvme_attach_controller" 00:18:57.591 }' 00:18:57.591 [2024-07-13 07:05:05.593816] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:57.591 [2024-07-13 07:05:05.593947] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid99138 ] 00:18:57.849 [2024-07-13 07:05:05.740167] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:57.850 [2024-07-13 07:05:05.868015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.850 [2024-07-13 07:05:05.868162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.850 [2024-07-13 07:05:05.868167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.108 I/O targets: 00:18:58.108 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:58.108 00:18:58.108 00:18:58.108 CUnit - A unit testing framework for C - Version 2.1-3 00:18:58.108 http://cunit.sourceforge.net/ 00:18:58.108 00:18:58.108 00:18:58.108 Suite: bdevio tests on: Nvme1n1 00:18:58.108 Test: blockdev write read block ...passed 00:18:58.108 Test: blockdev write zeroes read block ...passed 00:18:58.108 Test: blockdev write zeroes read no split ...passed 00:18:58.108 Test: blockdev write zeroes read split ...passed 00:18:58.367 Test: blockdev write zeroes read split partial ...passed 00:18:58.367 Test: blockdev reset ...[2024-07-13 07:05:06.187720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.367 [2024-07-13 07:05:06.188088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37b50 (9): Bad file descriptor 00:18:58.367 [2024-07-13 07:05:06.204908] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:58.367 passed 00:18:58.367 Test: blockdev write read 8 blocks ...passed 00:18:58.367 Test: blockdev write read size > 128k ...passed 00:18:58.367 Test: blockdev write read invalid size ...passed 00:18:58.367 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:58.367 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:58.367 Test: blockdev write read max offset ...passed 00:18:58.367 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:58.367 Test: blockdev writev readv 8 blocks ...passed 00:18:58.367 Test: blockdev writev readv 30 x 1block ...passed 00:18:58.367 Test: blockdev writev readv block ...passed 00:18:58.367 Test: blockdev writev readv size > 128k ...passed 00:18:58.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:58.367 Test: blockdev comparev and writev ...[2024-07-13 07:05:06.381271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.367 [2024-07-13 07:05:06.381325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.367 [2024-07-13 07:05:06.381362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.367 [2024-07-13 07:05:06.381385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:58.367 [2024-07-13 07:05:06.381818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.367 [2024-07-13 07:05:06.381838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:58.367 [2024-07-13 07:05:06.381855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.367 [2024-07-13 07:05:06.381865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:58.367 [2024-07-13 07:05:06.382261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.367 [2024-07-13 07:05:06.382292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:58.367 [2024-07-13 07:05:06.382310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.367 [2024-07-13 07:05:06.382320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:58.367 [2024-07-13 07:05:06.382684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.367 [2024-07-13 07:05:06.382705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:58.367 [2024-07-13 07:05:06.382721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:58.367 [2024-07-13 07:05:06.382731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:58.368 passed 00:18:58.625 Test: blockdev nvme passthru rw ...passed 00:18:58.625 Test: blockdev nvme passthru vendor specific ...[2024-07-13 07:05:06.466900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.625 [2024-07-13 07:05:06.466941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:58.625 [2024-07-13 07:05:06.467074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.625 [2024-07-13 07:05:06.467090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:58.625 [2024-07-13 07:05:06.467206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.625 [2024-07-13 07:05:06.467221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:58.625 [2024-07-13 07:05:06.467354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:58.625 [2024-07-13 07:05:06.467369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:58.625 passed 00:18:58.625 Test: blockdev nvme admin passthru ...passed 00:18:58.625 Test: blockdev copy ...passed 00:18:58.625 00:18:58.625 Run Summary: Type Total Ran Passed Failed Inactive 00:18:58.625 suites 1 1 n/a 0 0 00:18:58.625 tests 23 23 23 0 0 00:18:58.625 asserts 152 152 152 0 n/a 00:18:58.625 00:18:58.625 Elapsed time = 0.929 seconds 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.883 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:59.141 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.141 07:05:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.141 rmmod nvme_tcp 00:18:59.141 rmmod nvme_fabrics 00:18:59.141 rmmod nvme_keyring 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 99084 ']' 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 99084 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 99084 ']' 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 99084 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99084 00:18:59.141 killing process with pid 99084 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99084' 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 99084 00:18:59.141 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 99084 00:18:59.399 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:59.399 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:59.399 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:59.399 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.399 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.399 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.399 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.399 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.659 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:59.659 00:18:59.659 real 0m3.626s 00:18:59.659 user 0m13.145s 00:18:59.659 sys 0m1.406s 00:18:59.659 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:59.659 07:05:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:59.659 ************************************ 00:18:59.659 END TEST nvmf_bdevio_no_huge 00:18:59.659 ************************************ 00:18:59.659 07:05:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:59.659 07:05:07 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:59.659 07:05:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:59.659 07:05:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.659 07:05:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:59.659 ************************************ 00:18:59.659 START TEST nvmf_tls 00:18:59.659 ************************************ 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:59.659 * Looking for test storage... 
00:18:59.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.659 07:05:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:59.660 Cannot find device "nvmf_tgt_br" 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:59.660 Cannot find device "nvmf_tgt_br2" 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:59.660 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:59.920 Cannot find device "nvmf_tgt_br" 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:59.920 Cannot find device "nvmf_tgt_br2" 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:59.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:59.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:59.920 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.179 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:00.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:19:00.179 00:19:00.179 --- 10.0.0.2 ping statistics --- 00:19:00.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.179 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:00.179 07:05:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:00.179 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.179 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:00.179 00:19:00.179 --- 10.0.0.3 ping statistics --- 00:19:00.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.179 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:00.179 00:19:00.179 --- 10.0.0.1 ping statistics --- 00:19:00.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.179 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99323 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99323 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99323 ']' 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.179 07:05:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.179 [2024-07-13 07:05:08.086486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:00.179 [2024-07-13 07:05:08.086613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.179 [2024-07-13 07:05:08.227015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.439 [2024-07-13 07:05:08.339161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.439 [2024-07-13 07:05:08.339250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
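Because nvmfappstart passes --wait-for-rpc, the target comes up with its framework initialization deferred, which is what lets the tls.sh suite switch the default socket implementation to ssl and tune its TLS options before any listener exists; framework_start_init (visible further down) then completes startup. A condensed sketch of that ordering, assuming the same rpc.py path used elsewhere in this log:

  # target started as: nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl                        # make ssl the default sock implementation
  $rpc sock_impl_set_options -i ssl --tls-version 13       # pin the TLS version before init
  $rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
  $rpc framework_start_init                                # finish target initialization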
00:19:00.439 [2024-07-13 07:05:08.339275] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.439 [2024-07-13 07:05:08.339288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.439 [2024-07-13 07:05:08.339299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.439 [2024-07-13 07:05:08.339341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.007 07:05:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.007 07:05:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:01.007 07:05:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.007 07:05:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:01.007 07:05:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.007 07:05:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.007 07:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:01.007 07:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:01.266 true 00:19:01.266 07:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:01.266 07:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:01.525 07:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:01.525 07:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:01.525 07:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:01.785 07:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:01.785 07:05:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:02.043 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:02.043 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:02.043 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:02.302 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.302 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:02.561 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:02.561 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:02.561 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.561 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:02.820 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:02.820 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:02.820 07:05:10 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:03.078 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.078 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:19:03.337 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:03.337 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:03.337 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:03.595 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:03.595 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.BfpHOxcyB0 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Nsljy50Gpp 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.BfpHOxcyB0 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Nsljy50Gpp 00:19:03.854 07:05:11 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:04.113 07:05:12 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:04.372 07:05:12 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.BfpHOxcyB0 
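The two NVMeTLSkey-1 strings generated just above come out of format_interchange_psk, which base64-encodes the supplied key material with what appears to be a four-byte checksum appended (the encoded blobs are exactly four bytes longer than the hex strings fed in, which is why the base64 has no padding). A hypothetical reconstruction of that helper, assuming the trailing bytes are a little-endian CRC32 of the key and the digest argument becomes the two-digit field after the prefix; this is a sketch for illustration, not a copy of nvmf/common.sh:

format_key_sketch() {   # e.g. format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
  python3 - "$1" "$2" "$3" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: little-endian CRC32 appended before base64
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PY
}

The resulting temp files are chmod 0600 and then handed to nvmf_subsystem_add_host and bdev_nvme_attach_controller as --psk paths later in the run.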
00:19:04.372 07:05:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BfpHOxcyB0 00:19:04.372 07:05:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:04.631 [2024-07-13 07:05:12.582957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.631 07:05:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:04.890 07:05:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:05.147 [2024-07-13 07:05:12.986998] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.147 [2024-07-13 07:05:12.987295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.147 07:05:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.147 malloc0 00:19:05.405 07:05:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.405 07:05:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BfpHOxcyB0 00:19:05.663 [2024-07-13 07:05:13.623696] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:05.663 07:05:13 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.BfpHOxcyB0 00:19:17.863 Initializing NVMe Controllers 00:19:17.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:17.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:17.863 Initialization complete. Launching workers. 
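The TLS-specific pieces of the setup just above are easy to miss in the trace: the listener is created with -k (the flag that triggers the "TLS support is considered experimental" notice), the host is authorized with an explicit --psk file, and the initiator side (spdk_nvme_perf) selects the ssl socket implementation and points at the same key. Condensed from the commands above, with the key path from this run:

  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BfpHOxcyB0
  spdk_nvme_perf -S ssl --psk-path /tmp/tmp.BfpHOxcyB0 -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1'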
00:19:17.863 ======================================================== 00:19:17.863 Latency(us) 00:19:17.863 Device Information : IOPS MiB/s Average min max 00:19:17.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9172.66 35.83 6979.03 1355.49 9002.99 00:19:17.863 ======================================================== 00:19:17.863 Total : 9172.66 35.83 6979.03 1355.49 9002.99 00:19:17.863 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BfpHOxcyB0 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BfpHOxcyB0' 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99676 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99676 /var/tmp/bdevperf.sock 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99676 ']' 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.863 07:05:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.863 [2024-07-13 07:05:23.888275] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:19:17.863 [2024-07-13 07:05:23.888383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99676 ] 00:19:17.863 [2024-07-13 07:05:24.034003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.863 [2024-07-13 07:05:24.151410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.863 07:05:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.863 07:05:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.863 07:05:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BfpHOxcyB0 00:19:17.863 [2024-07-13 07:05:25.086005] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.863 [2024-07-13 07:05:25.086173] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:17.863 TLSTESTn1 00:19:17.863 07:05:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:17.863 Running I/O for 10 seconds... 00:19:27.838 00:19:27.838 Latency(us) 00:19:27.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.838 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:27.838 Verification LBA range: start 0x0 length 0x2000 00:19:27.838 TLSTESTn1 : 10.03 3773.35 14.74 0.00 0.00 33846.21 6613.18 22758.87 00:19:27.838 =================================================================================================================== 00:19:27.838 Total : 3773.35 14.74 0.00 0.00 33846.21 6613.18 22758.87 00:19:27.838 0 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99676 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99676 ']' 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99676 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99676 00:19:27.838 killing process with pid 99676 00:19:27.838 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.838 00:19:27.838 Latency(us) 00:19:27.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.838 =================================================================================================================== 00:19:27.838 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
99676' 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99676 00:19:27.838 [2024-07-13 07:05:35.342835] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99676 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nsljy50Gpp 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nsljy50Gpp 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nsljy50Gpp 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Nsljy50Gpp' 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99823 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99823 /var/tmp/bdevperf.sock 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99823 ']' 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.838 07:05:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.838 [2024-07-13 07:05:35.671325] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:19:27.838 [2024-07-13 07:05:35.671429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99823 ] 00:19:27.838 [2024-07-13 07:05:35.810641] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.838 [2024-07-13 07:05:35.911395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.776 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.776 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:28.776 07:05:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nsljy50Gpp 00:19:29.047 [2024-07-13 07:05:36.890030] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.047 [2024-07-13 07:05:36.890206] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:29.047 [2024-07-13 07:05:36.901534] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:29.047 [2024-07-13 07:05:36.902046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe46970 (107): Transport endpoint is not connected 00:19:29.047 [2024-07-13 07:05:36.903038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe46970 (9): Bad file descriptor 00:19:29.047 [2024-07-13 07:05:36.904034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:29.047 [2024-07-13 07:05:36.904067] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:29.047 [2024-07-13 07:05:36.904080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
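This attach is expected to fail: the whole run is wrapped in the suite's NOT helper (the es=0/valid_exec_arg lines above and the es=1, (( es > 128 )) and return 1 lines just below belong to it), and the target only has the first key, /tmp/tmp.BfpHOxcyB0, registered for host1, so presenting /tmp/tmp.Nsljy50Gpp leaves the TLS handshake with nothing to match and bdev_nvme_attach_controller comes back with Code=-5 (Input/output error). A hypothetical sketch of what an expected-failure wrapper of this shape does; illustrative only, not copied from autotest_common.sh:

  NOT_sketch() {                          # succeed only if the wrapped command fails
    local es=0
    "$@" || es=$?                         # run it, capture the exit status (es=1 in the trace here)
    (( es > 128 )) && es=$(( es & 127 ))  # treat deaths-by-signal as plain failures
    (( es != 0 ))                         # invert: a non-zero exit from the command makes the test pass
  }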
00:19:29.047 2024/07/13 07:05:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.Nsljy50Gpp subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:29.047 request: 00:19:29.047 { 00:19:29.047 "method": "bdev_nvme_attach_controller", 00:19:29.047 "params": { 00:19:29.047 "name": "TLSTEST", 00:19:29.047 "trtype": "tcp", 00:19:29.047 "traddr": "10.0.0.2", 00:19:29.047 "adrfam": "ipv4", 00:19:29.047 "trsvcid": "4420", 00:19:29.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.047 "prchk_reftag": false, 00:19:29.047 "prchk_guard": false, 00:19:29.047 "hdgst": false, 00:19:29.047 "ddgst": false, 00:19:29.047 "psk": "/tmp/tmp.Nsljy50Gpp" 00:19:29.047 } 00:19:29.047 } 00:19:29.047 Got JSON-RPC error response 00:19:29.047 GoRPCClient: error on JSON-RPC call 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99823 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99823 ']' 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99823 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99823 00:19:29.047 killing process with pid 99823 00:19:29.047 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.047 00:19:29.047 Latency(us) 00:19:29.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.047 =================================================================================================================== 00:19:29.047 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99823' 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99823 00:19:29.047 [2024-07-13 07:05:36.953521] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:29.047 07:05:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99823 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BfpHOxcyB0 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BfpHOxcyB0 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BfpHOxcyB0 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.345 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BfpHOxcyB0' 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99873 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99873 /var/tmp/bdevperf.sock 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99873 ']' 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.346 07:05:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.346 [2024-07-13 07:05:37.320460] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:19:29.346 [2024-07-13 07:05:37.320774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99873 ] 00:19:29.610 [2024-07-13 07:05:37.461988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.610 [2024-07-13 07:05:37.557058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.177 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.177 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:30.177 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.BfpHOxcyB0 00:19:30.436 [2024-07-13 07:05:38.499075] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.436 [2024-07-13 07:05:38.499233] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:30.436 [2024-07-13 07:05:38.504426] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:30.436 [2024-07-13 07:05:38.504484] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:30.436 [2024-07-13 07:05:38.504573] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:30.436 [2024-07-13 07:05:38.505137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbb970 (107): Transport endpoint is not connected 00:19:30.436 [2024-07-13 07:05:38.506125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbb970 (9): Bad file descriptor 00:19:30.436 [2024-07-13 07:05:38.507121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.436 [2024-07-13 07:05:38.507167] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:30.436 [2024-07-13 07:05:38.507198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
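The failure mode here differs from the previous case: the key is the valid one, but it was registered for host1 only, and the target looks the PSK up by an identity string built from the host and subsystem NQNs, so with hostnqn host2 the lookup finds nothing and the handshake is rejected in the tcp/posix socket layer before the NVMe-oF connect ever completes. Judging by the error text above, the identity appears to be composed as

  identity="NVMe0R01 $hostnqn $subnqn"   # e.g. "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"

so authorizing nqn.2016-06.io.spdk:host2 with its own nvmf_subsystem_add_host --psk entry would be the way to let this variant connect.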
00:19:30.696 2024/07/13 07:05:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.BfpHOxcyB0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:30.696 request: 00:19:30.696 { 00:19:30.696 "method": "bdev_nvme_attach_controller", 00:19:30.696 "params": { 00:19:30.696 "name": "TLSTEST", 00:19:30.696 "trtype": "tcp", 00:19:30.696 "traddr": "10.0.0.2", 00:19:30.696 "adrfam": "ipv4", 00:19:30.696 "trsvcid": "4420", 00:19:30.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.696 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:30.696 "prchk_reftag": false, 00:19:30.696 "prchk_guard": false, 00:19:30.696 "hdgst": false, 00:19:30.696 "ddgst": false, 00:19:30.696 "psk": "/tmp/tmp.BfpHOxcyB0" 00:19:30.696 } 00:19:30.696 } 00:19:30.696 Got JSON-RPC error response 00:19:30.696 GoRPCClient: error on JSON-RPC call 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99873 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99873 ']' 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99873 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99873 00:19:30.696 killing process with pid 99873 00:19:30.696 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.696 00:19:30.696 Latency(us) 00:19:30.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.696 =================================================================================================================== 00:19:30.696 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99873' 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99873 00:19:30.696 [2024-07-13 07:05:38.558963] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:30.696 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99873 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BfpHOxcyB0 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BfpHOxcyB0 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BfpHOxcyB0 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BfpHOxcyB0' 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99921 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99921 /var/tmp/bdevperf.sock 00:19:30.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99921 ']' 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.955 07:05:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.955 [2024-07-13 07:05:38.874345] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:19:30.955 [2024-07-13 07:05:38.874456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99921 ] 00:19:30.955 [2024-07-13 07:05:39.006522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.214 [2024-07-13 07:05:39.100367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.782 07:05:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.782 07:05:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:31.782 07:05:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BfpHOxcyB0 00:19:32.041 [2024-07-13 07:05:40.061252] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.041 [2024-07-13 07:05:40.061371] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:32.041 [2024-07-13 07:05:40.071795] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:32.041 [2024-07-13 07:05:40.071855] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:32.041 [2024-07-13 07:05:40.071932] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:32.041 [2024-07-13 07:05:40.072125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7db970 (107): Transport endpoint is not connected 00:19:32.041 [2024-07-13 07:05:40.073108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7db970 (9): Bad file descriptor 00:19:32.041 [2024-07-13 07:05:40.074114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:32.041 [2024-07-13 07:05:40.074154] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:32.041 [2024-07-13 07:05:40.074168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:32.041 2024/07/13 07:05:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.BfpHOxcyB0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:32.041 request: 00:19:32.041 { 00:19:32.041 "method": "bdev_nvme_attach_controller", 00:19:32.041 "params": { 00:19:32.041 "name": "TLSTEST", 00:19:32.041 "trtype": "tcp", 00:19:32.041 "traddr": "10.0.0.2", 00:19:32.041 "adrfam": "ipv4", 00:19:32.041 "trsvcid": "4420", 00:19:32.041 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:32.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.041 "prchk_reftag": false, 00:19:32.041 "prchk_guard": false, 00:19:32.041 "hdgst": false, 00:19:32.041 "ddgst": false, 00:19:32.041 "psk": "/tmp/tmp.BfpHOxcyB0" 00:19:32.041 } 00:19:32.041 } 00:19:32.041 Got JSON-RPC error response 00:19:32.041 GoRPCClient: error on JSON-RPC call 00:19:32.041 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99921 00:19:32.041 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99921 ']' 00:19:32.041 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99921 00:19:32.041 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:32.041 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.041 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99921 00:19:32.300 killing process with pid 99921 00:19:32.300 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.300 00:19:32.300 Latency(us) 00:19:32.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.300 =================================================================================================================== 00:19:32.300 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.300 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:32.300 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:32.300 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99921' 00:19:32.300 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99921 00:19:32.300 [2024-07-13 07:05:40.125670] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:32.300 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99921 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99961 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.559 07:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99961 /var/tmp/bdevperf.sock 00:19:32.560 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 99961 ']' 00:19:32.560 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.560 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.560 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.560 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.560 07:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.560 [2024-07-13 07:05:40.440039] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
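The third negative case passes an empty psk argument, so the attach that follows carries no --psk at all; presumably because the listener in this run requires TLS (the -k flag visible in the later listener setup), the plain TCP connection is dropped and controller initialization fails with "Transport endpoint is not connected". In sketch form, the only difference from the earlier calls is the missing key:

  # Expected to fail: no --psk against a TLS-only listener; the socket is reset
  # before the controller finishes initializing.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1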
00:19:32.560 [2024-07-13 07:05:40.440131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99961 ] 00:19:32.560 [2024-07-13 07:05:40.573016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.818 [2024-07-13 07:05:40.666731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.386 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.386 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:33.386 07:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:33.645 [2024-07-13 07:05:41.612236] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:33.645 [2024-07-13 07:05:41.614171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a827d0 (9): Bad file descriptor 00:19:33.645 [2024-07-13 07:05:41.615165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:33.645 [2024-07-13 07:05:41.615201] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:33.645 [2024-07-13 07:05:41.615232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.645 2024/07/13 07:05:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:33.645 request: 00:19:33.645 { 00:19:33.645 "method": "bdev_nvme_attach_controller", 00:19:33.645 "params": { 00:19:33.645 "name": "TLSTEST", 00:19:33.645 "trtype": "tcp", 00:19:33.645 "traddr": "10.0.0.2", 00:19:33.645 "adrfam": "ipv4", 00:19:33.645 "trsvcid": "4420", 00:19:33.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.645 "prchk_reftag": false, 00:19:33.645 "prchk_guard": false, 00:19:33.645 "hdgst": false, 00:19:33.645 "ddgst": false 00:19:33.645 } 00:19:33.645 } 00:19:33.645 Got JSON-RPC error response 00:19:33.645 GoRPCClient: error on JSON-RPC call 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99961 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99961 ']' 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99961 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99961 00:19:33.645 killing process with pid 99961 00:19:33.645 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.645 00:19:33.645 Latency(us) 00:19:33.645 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.645 =================================================================================================================== 00:19:33.645 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99961' 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99961 00:19:33.645 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99961 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 99323 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 99323 ']' 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 99323 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.904 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99323 00:19:34.163 killing process with pid 99323 00:19:34.163 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:34.163 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:34.163 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99323' 00:19:34.163 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 99323 00:19:34.163 [2024-07-13 07:05:41.986666] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:34.163 07:05:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 99323 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.HHNJGClTEI 
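With the failure-path checks done, format_interchange_psk wraps the raw hex key in the TLS PSK interchange format, NVMeTLSkey-1:02:<base64 payload>:, where the 02 field reflects the digest argument (2) passed to the helper, and mktemp provides a file to hold it. The next trace lines write the key out and tighten the permissions; assuming the echo is redirected into the temp file, as the later chmod and --psk usage imply, that amounts to:

  # Sketch of persisting the generated interchange-format key; the temp file name
  # comes from mktemp and will differ on every run.
  key='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  echo -n "$key" > /tmp/tmp.HHNJGClTEI
  chmod 0600 /tmp/tmp.HHNJGClTEI   # later cases show that anything looser is rejected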
00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.HHNJGClTEI 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100022 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100022 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100022 ']' 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.422 07:05:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.422 [2024-07-13 07:05:42.401656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:34.422 [2024-07-13 07:05:42.401763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.682 [2024-07-13 07:05:42.537338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.682 [2024-07-13 07:05:42.633403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.682 [2024-07-13 07:05:42.633485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.682 [2024-07-13 07:05:42.633498] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.682 [2024-07-13 07:05:42.633506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.682 [2024-07-13 07:05:42.633513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
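Above, a fresh nvmf target (pid 100022) is brought up for the positive test: nvmf_tgt runs inside the nvmf_tgt_ns_spdk network namespace with a single-core mask and every tracepoint group enabled, and the harness waits for its default RPC socket. Reduced to the essentials, the launch is:

  # Sketch of the target launch from this trace; namespace, instance id, trace mask
  # and core mask are the ones used in this run.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # waitforlisten then polls until /var/tmp/spdk.sock accepts JSON-RPC requests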
00:19:34.682 [2024-07-13 07:05:42.633548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.HHNJGClTEI 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HHNJGClTEI 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.617 [2024-07-13 07:05:43.633012] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.617 07:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.876 07:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.136 [2024-07-13 07:05:44.041048] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.136 [2024-07-13 07:05:44.041362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.136 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.395 malloc0 00:19:36.395 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI 00:19:36.655 [2024-07-13 07:05:44.688783] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HHNJGClTEI 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HHNJGClTEI' 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100119 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.655 07:05:44 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100119 /var/tmp/bdevperf.sock 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100119 ']' 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.655 07:05:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.914 [2024-07-13 07:05:44.757208] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:36.914 [2024-07-13 07:05:44.757316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100119 ] 00:19:36.914 [2024-07-13 07:05:44.891641] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.172 [2024-07-13 07:05:45.012761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.739 07:05:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.739 07:05:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:37.739 07:05:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI 00:19:37.998 [2024-07-13 07:05:45.912809] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.998 [2024-07-13 07:05:45.912983] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:37.998 TLSTESTn1 00:19:37.998 07:05:46 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:38.257 Running I/O for 10 seconds... 
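This is the happy path: setup_nvmf_tgt registers the TCP transport, a subsystem backed by a malloc bdev, a TLS-enabled listener (-k) and host1 with the 0600 key, then bdevperf (pid 100119) attaches with the same key and bdevperf.py runs the verify workload for 10 seconds; the results print just below. Collapsed into the underlying RPC calls, with the paths and NQNs from this run and $RPC/$KEY as shorthand, the sequence is roughly:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/tmp/tmp.HHNJGClTEI                      # 0600 interchange-format PSK from above

  # target side, over the default /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

  # initiator side, against the bdevperf socket, then start the workload
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests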
00:19:48.226 00:19:48.226 Latency(us) 00:19:48.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.226 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:48.226 Verification LBA range: start 0x0 length 0x2000 00:19:48.226 TLSTESTn1 : 10.03 3649.90 14.26 0.00 0.00 34994.38 6166.34 35985.22 00:19:48.226 =================================================================================================================== 00:19:48.226 Total : 3649.90 14.26 0.00 0.00 34994.38 6166.34 35985.22 00:19:48.226 0 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 100119 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100119 ']' 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100119 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100119 00:19:48.226 killing process with pid 100119 00:19:48.226 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.226 00:19:48.226 Latency(us) 00:19:48.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.226 =================================================================================================================== 00:19:48.226 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100119' 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100119 00:19:48.226 [2024-07-13 07:05:56.195219] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:48.226 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100119 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.HHNJGClTEI 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HHNJGClTEI 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HHNJGClTEI 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HHNJGClTEI 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
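After the successful run is torn down, the key file is deliberately loosened to 0666; the attach that follows is expected to be refused on the initiator side, with bdev_nvme_load_psk reporting incorrect permissions before any connection is made. In outline:

  chmod 0666 /tmp/tmp.HHNJGClTEI    # deliberately too permissive
  # Same attach as in the happy path, now expected to fail with
  # "Incorrect permissions for PSK file" / "Could not load PSK from ..."
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI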
00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HHNJGClTEI' 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100267 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100267 /var/tmp/bdevperf.sock 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100267 ']' 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.485 07:05:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.485 [2024-07-13 07:05:56.470722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:48.485 [2024-07-13 07:05:56.470844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100267 ] 00:19:48.744 [2024-07-13 07:05:56.614382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.744 [2024-07-13 07:05:56.709678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.680 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.680 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:49.680 07:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI 00:19:49.680 [2024-07-13 07:05:57.684865] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.680 [2024-07-13 07:05:57.685002] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:49.680 [2024-07-13 07:05:57.685027] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.HHNJGClTEI 00:19:49.680 2024/07/13 07:05:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.HHNJGClTEI subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received 
for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:19:49.680 request: 00:19:49.680 { 00:19:49.680 "method": "bdev_nvme_attach_controller", 00:19:49.680 "params": { 00:19:49.680 "name": "TLSTEST", 00:19:49.680 "trtype": "tcp", 00:19:49.680 "traddr": "10.0.0.2", 00:19:49.680 "adrfam": "ipv4", 00:19:49.680 "trsvcid": "4420", 00:19:49.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.680 "prchk_reftag": false, 00:19:49.680 "prchk_guard": false, 00:19:49.680 "hdgst": false, 00:19:49.680 "ddgst": false, 00:19:49.680 "psk": "/tmp/tmp.HHNJGClTEI" 00:19:49.680 } 00:19:49.680 } 00:19:49.680 Got JSON-RPC error response 00:19:49.680 GoRPCClient: error on JSON-RPC call 00:19:49.680 07:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100267 00:19:49.680 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100267 ']' 00:19:49.680 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100267 00:19:49.680 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:49.680 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.680 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100267 00:19:49.680 killing process with pid 100267 00:19:49.680 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.680 00:19:49.680 Latency(us) 00:19:49.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.681 =================================================================================================================== 00:19:49.681 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.681 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:49.681 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:49.681 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100267' 00:19:49.681 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100267 00:19:49.681 07:05:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100267 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 100022 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100022 ']' 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100022 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100022 00:19:50.248 killing process with pid 100022 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:50.248 07:05:58 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100022' 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100022 00:19:50.248 [2024-07-13 07:05:58.059581] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:50.248 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100022 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100323 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100323 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100323 ']' 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.507 07:05:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.507 [2024-07-13 07:05:58.423188] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:50.507 [2024-07-13 07:05:58.423294] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.507 [2024-07-13 07:05:58.564112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.766 [2024-07-13 07:05:58.666286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.766 [2024-07-13 07:05:58.666383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.766 [2024-07-13 07:05:58.666397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.766 [2024-07-13 07:05:58.666421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.766 [2024-07-13 07:05:58.666437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
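The same permission rule is then checked on the target side: the previous target (pid 100022) has been killed and a new one (pid 100323) is starting while the key file is still 0666, and the setup_nvmf_tgt call that follows is wrapped in NOT because registering the host is expected to fail. The step under test is essentially:

  # Expected to fail while /tmp/tmp.HHNJGClTEI is mode 0666: the target refuses to
  # read the PSK file ("Could not retrieve PSK from file", JSON-RPC error -32603).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI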
00:19:50.766 [2024-07-13 07:05:58.666492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.HHNJGClTEI 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.HHNJGClTEI 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.HHNJGClTEI 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HHNJGClTEI 00:19:51.334 07:05:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:51.592 [2024-07-13 07:05:59.638346] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.592 07:05:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.850 07:05:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.109 [2024-07-13 07:06:00.134502] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.109 [2024-07-13 07:06:00.134854] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.109 07:06:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.368 malloc0 00:19:52.368 07:06:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.627 07:06:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI 00:19:52.887 [2024-07-13 07:06:00.801544] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:52.887 [2024-07-13 07:06:00.801639] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:52.887 [2024-07-13 07:06:00.801691] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:52.887 2024/07/13 07:06:00 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.HHNJGClTEI], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:19:52.887 request: 00:19:52.887 { 00:19:52.887 "method": "nvmf_subsystem_add_host", 00:19:52.887 "params": { 00:19:52.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.887 "host": "nqn.2016-06.io.spdk:host1", 00:19:52.887 "psk": "/tmp/tmp.HHNJGClTEI" 00:19:52.887 } 00:19:52.887 } 00:19:52.887 Got JSON-RPC error response 00:19:52.887 GoRPCClient: error on JSON-RPC call 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 100323 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100323 ']' 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100323 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100323 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.887 killing process with pid 100323 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100323' 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100323 00:19:52.887 07:06:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100323 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.HHNJGClTEI 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100428 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100428 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100428 ']' 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
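Above, the key file is restored to 0600 and a third target (pid 100428) is being started; the setup and bdevperf attach that follow succeed again, and the script then snapshots the live target configuration with save_config, which is the large JSON document further below (transport, malloc bdev, subsystem and TLS listener all appear in it). The test captures the output in its tgtconf shell variable; writing it to a file, as in this sketch, is only an assumption for illustration:

  chmod 0600 /tmp/tmp.HHNJGClTEI
  # Dump the running target's full JSON configuration for later comparison.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgtconf.json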
00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.147 07:06:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.147 [2024-07-13 07:06:01.146109] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:53.147 [2024-07-13 07:06:01.146249] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.406 [2024-07-13 07:06:01.278051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.406 [2024-07-13 07:06:01.368029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.406 [2024-07-13 07:06:01.368248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.406 [2024-07-13 07:06:01.368319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.406 [2024-07-13 07:06:01.368394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.406 [2024-07-13 07:06:01.368460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.406 [2024-07-13 07:06:01.368625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.HHNJGClTEI 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HHNJGClTEI 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:54.344 [2024-07-13 07:06:02.376113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.344 07:06:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:54.603 07:06:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:54.862 [2024-07-13 07:06:02.820176] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.862 [2024-07-13 07:06:02.820472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.862 07:06:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:55.120 malloc0 00:19:55.120 07:06:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.379 07:06:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI 00:19:55.642 [2024-07-13 07:06:03.596038] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=100531 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 100531 /var/tmp/bdevperf.sock 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100531 ']' 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.642 07:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.642 [2024-07-13 07:06:03.663375] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:55.642 [2024-07-13 07:06:03.663475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100531 ] 00:19:55.919 [2024-07-13 07:06:03.793831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.919 [2024-07-13 07:06:03.906668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.872 07:06:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.872 07:06:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:56.872 07:06:04 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI 00:19:56.872 [2024-07-13 07:06:04.938036] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.872 [2024-07-13 07:06:04.938214] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:57.131 TLSTESTn1 00:19:57.131 07:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:57.390 07:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:57.390 "subsystems": [ 00:19:57.390 { 00:19:57.390 "subsystem": "keyring", 00:19:57.390 "config": [] 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "subsystem": "iobuf", 00:19:57.390 "config": [ 00:19:57.390 { 00:19:57.390 "method": "iobuf_set_options", 00:19:57.390 "params": { 00:19:57.390 "large_bufsize": 
135168, 00:19:57.390 "large_pool_count": 1024, 00:19:57.390 "small_bufsize": 8192, 00:19:57.390 "small_pool_count": 8192 00:19:57.390 } 00:19:57.390 } 00:19:57.390 ] 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "subsystem": "sock", 00:19:57.390 "config": [ 00:19:57.390 { 00:19:57.390 "method": "sock_set_default_impl", 00:19:57.390 "params": { 00:19:57.390 "impl_name": "posix" 00:19:57.390 } 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "method": "sock_impl_set_options", 00:19:57.390 "params": { 00:19:57.390 "enable_ktls": false, 00:19:57.390 "enable_placement_id": 0, 00:19:57.390 "enable_quickack": false, 00:19:57.390 "enable_recv_pipe": true, 00:19:57.390 "enable_zerocopy_send_client": false, 00:19:57.390 "enable_zerocopy_send_server": true, 00:19:57.390 "impl_name": "ssl", 00:19:57.390 "recv_buf_size": 4096, 00:19:57.390 "send_buf_size": 4096, 00:19:57.390 "tls_version": 0, 00:19:57.390 "zerocopy_threshold": 0 00:19:57.390 } 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "method": "sock_impl_set_options", 00:19:57.390 "params": { 00:19:57.390 "enable_ktls": false, 00:19:57.390 "enable_placement_id": 0, 00:19:57.390 "enable_quickack": false, 00:19:57.390 "enable_recv_pipe": true, 00:19:57.390 "enable_zerocopy_send_client": false, 00:19:57.390 "enable_zerocopy_send_server": true, 00:19:57.390 "impl_name": "posix", 00:19:57.390 "recv_buf_size": 2097152, 00:19:57.390 "send_buf_size": 2097152, 00:19:57.390 "tls_version": 0, 00:19:57.390 "zerocopy_threshold": 0 00:19:57.390 } 00:19:57.390 } 00:19:57.390 ] 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "subsystem": "vmd", 00:19:57.390 "config": [] 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "subsystem": "accel", 00:19:57.390 "config": [ 00:19:57.390 { 00:19:57.390 "method": "accel_set_options", 00:19:57.390 "params": { 00:19:57.390 "buf_count": 2048, 00:19:57.390 "large_cache_size": 16, 00:19:57.390 "sequence_count": 2048, 00:19:57.390 "small_cache_size": 128, 00:19:57.390 "task_count": 2048 00:19:57.390 } 00:19:57.390 } 00:19:57.390 ] 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "subsystem": "bdev", 00:19:57.390 "config": [ 00:19:57.390 { 00:19:57.390 "method": "bdev_set_options", 00:19:57.390 "params": { 00:19:57.390 "bdev_auto_examine": true, 00:19:57.390 "bdev_io_cache_size": 256, 00:19:57.390 "bdev_io_pool_size": 65535, 00:19:57.390 "iobuf_large_cache_size": 16, 00:19:57.390 "iobuf_small_cache_size": 128 00:19:57.390 } 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "method": "bdev_raid_set_options", 00:19:57.390 "params": { 00:19:57.390 "process_window_size_kb": 1024 00:19:57.390 } 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "method": "bdev_iscsi_set_options", 00:19:57.390 "params": { 00:19:57.390 "timeout_sec": 30 00:19:57.390 } 00:19:57.390 }, 00:19:57.390 { 00:19:57.390 "method": "bdev_nvme_set_options", 00:19:57.390 "params": { 00:19:57.390 "action_on_timeout": "none", 00:19:57.390 "allow_accel_sequence": false, 00:19:57.390 "arbitration_burst": 0, 00:19:57.390 "bdev_retry_count": 3, 00:19:57.390 "ctrlr_loss_timeout_sec": 0, 00:19:57.390 "delay_cmd_submit": true, 00:19:57.390 "dhchap_dhgroups": [ 00:19:57.390 "null", 00:19:57.390 "ffdhe2048", 00:19:57.390 "ffdhe3072", 00:19:57.390 "ffdhe4096", 00:19:57.390 "ffdhe6144", 00:19:57.390 "ffdhe8192" 00:19:57.390 ], 00:19:57.390 "dhchap_digests": [ 00:19:57.390 "sha256", 00:19:57.390 "sha384", 00:19:57.390 "sha512" 00:19:57.390 ], 00:19:57.390 "disable_auto_failback": false, 00:19:57.390 "fast_io_fail_timeout_sec": 0, 00:19:57.390 "generate_uuids": false, 00:19:57.390 "high_priority_weight": 0, 
00:19:57.390 "io_path_stat": false, 00:19:57.390 "io_queue_requests": 0, 00:19:57.391 "keep_alive_timeout_ms": 10000, 00:19:57.391 "low_priority_weight": 0, 00:19:57.391 "medium_priority_weight": 0, 00:19:57.391 "nvme_adminq_poll_period_us": 10000, 00:19:57.391 "nvme_error_stat": false, 00:19:57.391 "nvme_ioq_poll_period_us": 0, 00:19:57.391 "rdma_cm_event_timeout_ms": 0, 00:19:57.391 "rdma_max_cq_size": 0, 00:19:57.391 "rdma_srq_size": 0, 00:19:57.391 "reconnect_delay_sec": 0, 00:19:57.391 "timeout_admin_us": 0, 00:19:57.391 "timeout_us": 0, 00:19:57.391 "transport_ack_timeout": 0, 00:19:57.391 "transport_retry_count": 4, 00:19:57.391 "transport_tos": 0 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": "bdev_nvme_set_hotplug", 00:19:57.391 "params": { 00:19:57.391 "enable": false, 00:19:57.391 "period_us": 100000 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": "bdev_malloc_create", 00:19:57.391 "params": { 00:19:57.391 "block_size": 4096, 00:19:57.391 "name": "malloc0", 00:19:57.391 "num_blocks": 8192, 00:19:57.391 "optimal_io_boundary": 0, 00:19:57.391 "physical_block_size": 4096, 00:19:57.391 "uuid": "19d939df-5bc4-45bf-a250-45137a87486f" 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": "bdev_wait_for_examine" 00:19:57.391 } 00:19:57.391 ] 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "subsystem": "nbd", 00:19:57.391 "config": [] 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "subsystem": "scheduler", 00:19:57.391 "config": [ 00:19:57.391 { 00:19:57.391 "method": "framework_set_scheduler", 00:19:57.391 "params": { 00:19:57.391 "name": "static" 00:19:57.391 } 00:19:57.391 } 00:19:57.391 ] 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "subsystem": "nvmf", 00:19:57.391 "config": [ 00:19:57.391 { 00:19:57.391 "method": "nvmf_set_config", 00:19:57.391 "params": { 00:19:57.391 "admin_cmd_passthru": { 00:19:57.391 "identify_ctrlr": false 00:19:57.391 }, 00:19:57.391 "discovery_filter": "match_any" 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": "nvmf_set_max_subsystems", 00:19:57.391 "params": { 00:19:57.391 "max_subsystems": 1024 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": "nvmf_set_crdt", 00:19:57.391 "params": { 00:19:57.391 "crdt1": 0, 00:19:57.391 "crdt2": 0, 00:19:57.391 "crdt3": 0 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": "nvmf_create_transport", 00:19:57.391 "params": { 00:19:57.391 "abort_timeout_sec": 1, 00:19:57.391 "ack_timeout": 0, 00:19:57.391 "buf_cache_size": 4294967295, 00:19:57.391 "c2h_success": false, 00:19:57.391 "data_wr_pool_size": 0, 00:19:57.391 "dif_insert_or_strip": false, 00:19:57.391 "in_capsule_data_size": 4096, 00:19:57.391 "io_unit_size": 131072, 00:19:57.391 "max_aq_depth": 128, 00:19:57.391 "max_io_qpairs_per_ctrlr": 127, 00:19:57.391 "max_io_size": 131072, 00:19:57.391 "max_queue_depth": 128, 00:19:57.391 "num_shared_buffers": 511, 00:19:57.391 "sock_priority": 0, 00:19:57.391 "trtype": "TCP", 00:19:57.391 "zcopy": false 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": "nvmf_create_subsystem", 00:19:57.391 "params": { 00:19:57.391 "allow_any_host": false, 00:19:57.391 "ana_reporting": false, 00:19:57.391 "max_cntlid": 65519, 00:19:57.391 "max_namespaces": 10, 00:19:57.391 "min_cntlid": 1, 00:19:57.391 "model_number": "SPDK bdev Controller", 00:19:57.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.391 "serial_number": "SPDK00000000000001" 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": 
"nvmf_subsystem_add_host", 00:19:57.391 "params": { 00:19:57.391 "host": "nqn.2016-06.io.spdk:host1", 00:19:57.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.391 "psk": "/tmp/tmp.HHNJGClTEI" 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": "nvmf_subsystem_add_ns", 00:19:57.391 "params": { 00:19:57.391 "namespace": { 00:19:57.391 "bdev_name": "malloc0", 00:19:57.391 "nguid": "19D939DF5BC445BFA25045137A87486F", 00:19:57.391 "no_auto_visible": false, 00:19:57.391 "nsid": 1, 00:19:57.391 "uuid": "19d939df-5bc4-45bf-a250-45137a87486f" 00:19:57.391 }, 00:19:57.391 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:57.391 } 00:19:57.391 }, 00:19:57.391 { 00:19:57.391 "method": "nvmf_subsystem_add_listener", 00:19:57.391 "params": { 00:19:57.391 "listen_address": { 00:19:57.391 "adrfam": "IPv4", 00:19:57.391 "traddr": "10.0.0.2", 00:19:57.391 "trsvcid": "4420", 00:19:57.391 "trtype": "TCP" 00:19:57.391 }, 00:19:57.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.391 "secure_channel": true 00:19:57.391 } 00:19:57.391 } 00:19:57.391 ] 00:19:57.391 } 00:19:57.391 ] 00:19:57.391 }' 00:19:57.391 07:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:57.650 07:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:57.650 "subsystems": [ 00:19:57.650 { 00:19:57.650 "subsystem": "keyring", 00:19:57.650 "config": [] 00:19:57.650 }, 00:19:57.650 { 00:19:57.650 "subsystem": "iobuf", 00:19:57.650 "config": [ 00:19:57.650 { 00:19:57.650 "method": "iobuf_set_options", 00:19:57.650 "params": { 00:19:57.650 "large_bufsize": 135168, 00:19:57.650 "large_pool_count": 1024, 00:19:57.650 "small_bufsize": 8192, 00:19:57.650 "small_pool_count": 8192 00:19:57.650 } 00:19:57.650 } 00:19:57.650 ] 00:19:57.650 }, 00:19:57.650 { 00:19:57.650 "subsystem": "sock", 00:19:57.650 "config": [ 00:19:57.650 { 00:19:57.650 "method": "sock_set_default_impl", 00:19:57.650 "params": { 00:19:57.650 "impl_name": "posix" 00:19:57.650 } 00:19:57.650 }, 00:19:57.650 { 00:19:57.650 "method": "sock_impl_set_options", 00:19:57.650 "params": { 00:19:57.650 "enable_ktls": false, 00:19:57.650 "enable_placement_id": 0, 00:19:57.650 "enable_quickack": false, 00:19:57.650 "enable_recv_pipe": true, 00:19:57.650 "enable_zerocopy_send_client": false, 00:19:57.650 "enable_zerocopy_send_server": true, 00:19:57.650 "impl_name": "ssl", 00:19:57.650 "recv_buf_size": 4096, 00:19:57.650 "send_buf_size": 4096, 00:19:57.650 "tls_version": 0, 00:19:57.650 "zerocopy_threshold": 0 00:19:57.650 } 00:19:57.650 }, 00:19:57.650 { 00:19:57.650 "method": "sock_impl_set_options", 00:19:57.650 "params": { 00:19:57.650 "enable_ktls": false, 00:19:57.650 "enable_placement_id": 0, 00:19:57.650 "enable_quickack": false, 00:19:57.650 "enable_recv_pipe": true, 00:19:57.650 "enable_zerocopy_send_client": false, 00:19:57.650 "enable_zerocopy_send_server": true, 00:19:57.650 "impl_name": "posix", 00:19:57.650 "recv_buf_size": 2097152, 00:19:57.650 "send_buf_size": 2097152, 00:19:57.650 "tls_version": 0, 00:19:57.650 "zerocopy_threshold": 0 00:19:57.650 } 00:19:57.650 } 00:19:57.650 ] 00:19:57.650 }, 00:19:57.650 { 00:19:57.650 "subsystem": "vmd", 00:19:57.650 "config": [] 00:19:57.650 }, 00:19:57.650 { 00:19:57.650 "subsystem": "accel", 00:19:57.650 "config": [ 00:19:57.650 { 00:19:57.650 "method": "accel_set_options", 00:19:57.650 "params": { 00:19:57.650 "buf_count": 2048, 00:19:57.650 "large_cache_size": 16, 00:19:57.650 "sequence_count": 2048, 00:19:57.650 
"small_cache_size": 128, 00:19:57.650 "task_count": 2048 00:19:57.650 } 00:19:57.650 } 00:19:57.650 ] 00:19:57.650 }, 00:19:57.650 { 00:19:57.650 "subsystem": "bdev", 00:19:57.650 "config": [ 00:19:57.650 { 00:19:57.650 "method": "bdev_set_options", 00:19:57.650 "params": { 00:19:57.650 "bdev_auto_examine": true, 00:19:57.650 "bdev_io_cache_size": 256, 00:19:57.650 "bdev_io_pool_size": 65535, 00:19:57.650 "iobuf_large_cache_size": 16, 00:19:57.650 "iobuf_small_cache_size": 128 00:19:57.650 } 00:19:57.650 }, 00:19:57.650 { 00:19:57.650 "method": "bdev_raid_set_options", 00:19:57.650 "params": { 00:19:57.650 "process_window_size_kb": 1024 00:19:57.650 } 00:19:57.651 }, 00:19:57.651 { 00:19:57.651 "method": "bdev_iscsi_set_options", 00:19:57.651 "params": { 00:19:57.651 "timeout_sec": 30 00:19:57.651 } 00:19:57.651 }, 00:19:57.651 { 00:19:57.651 "method": "bdev_nvme_set_options", 00:19:57.651 "params": { 00:19:57.651 "action_on_timeout": "none", 00:19:57.651 "allow_accel_sequence": false, 00:19:57.651 "arbitration_burst": 0, 00:19:57.651 "bdev_retry_count": 3, 00:19:57.651 "ctrlr_loss_timeout_sec": 0, 00:19:57.651 "delay_cmd_submit": true, 00:19:57.651 "dhchap_dhgroups": [ 00:19:57.651 "null", 00:19:57.651 "ffdhe2048", 00:19:57.651 "ffdhe3072", 00:19:57.651 "ffdhe4096", 00:19:57.651 "ffdhe6144", 00:19:57.651 "ffdhe8192" 00:19:57.651 ], 00:19:57.651 "dhchap_digests": [ 00:19:57.651 "sha256", 00:19:57.651 "sha384", 00:19:57.651 "sha512" 00:19:57.651 ], 00:19:57.651 "disable_auto_failback": false, 00:19:57.651 "fast_io_fail_timeout_sec": 0, 00:19:57.651 "generate_uuids": false, 00:19:57.651 "high_priority_weight": 0, 00:19:57.651 "io_path_stat": false, 00:19:57.651 "io_queue_requests": 512, 00:19:57.651 "keep_alive_timeout_ms": 10000, 00:19:57.651 "low_priority_weight": 0, 00:19:57.651 "medium_priority_weight": 0, 00:19:57.651 "nvme_adminq_poll_period_us": 10000, 00:19:57.651 "nvme_error_stat": false, 00:19:57.651 "nvme_ioq_poll_period_us": 0, 00:19:57.651 "rdma_cm_event_timeout_ms": 0, 00:19:57.651 "rdma_max_cq_size": 0, 00:19:57.651 "rdma_srq_size": 0, 00:19:57.651 "reconnect_delay_sec": 0, 00:19:57.651 "timeout_admin_us": 0, 00:19:57.651 "timeout_us": 0, 00:19:57.651 "transport_ack_timeout": 0, 00:19:57.651 "transport_retry_count": 4, 00:19:57.651 "transport_tos": 0 00:19:57.651 } 00:19:57.651 }, 00:19:57.651 { 00:19:57.651 "method": "bdev_nvme_attach_controller", 00:19:57.651 "params": { 00:19:57.651 "adrfam": "IPv4", 00:19:57.651 "ctrlr_loss_timeout_sec": 0, 00:19:57.651 "ddgst": false, 00:19:57.651 "fast_io_fail_timeout_sec": 0, 00:19:57.651 "hdgst": false, 00:19:57.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.651 "name": "TLSTEST", 00:19:57.651 "prchk_guard": false, 00:19:57.651 "prchk_reftag": false, 00:19:57.651 "psk": "/tmp/tmp.HHNJGClTEI", 00:19:57.651 "reconnect_delay_sec": 0, 00:19:57.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.651 "traddr": "10.0.0.2", 00:19:57.651 "trsvcid": "4420", 00:19:57.651 "trtype": "TCP" 00:19:57.651 } 00:19:57.651 }, 00:19:57.651 { 00:19:57.651 "method": "bdev_nvme_set_hotplug", 00:19:57.651 "params": { 00:19:57.651 "enable": false, 00:19:57.651 "period_us": 100000 00:19:57.651 } 00:19:57.651 }, 00:19:57.651 { 00:19:57.651 "method": "bdev_wait_for_examine" 00:19:57.651 } 00:19:57.651 ] 00:19:57.651 }, 00:19:57.651 { 00:19:57.651 "subsystem": "nbd", 00:19:57.651 "config": [] 00:19:57.651 } 00:19:57.651 ] 00:19:57.651 }' 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 100531 00:19:57.651 07:06:05 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100531 ']' 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100531 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100531 00:19:57.651 killing process with pid 100531 00:19:57.651 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.651 00:19:57.651 Latency(us) 00:19:57.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.651 =================================================================================================================== 00:19:57.651 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100531' 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100531 00:19:57.651 [2024-07-13 07:06:05.703062] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:57.651 07:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100531 00:19:58.218 07:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 100428 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100428 ']' 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100428 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100428 00:19:58.218 killing process with pid 100428 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100428' 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100428 00:19:58.218 [2024-07-13 07:06:06.032272] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100428 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:58.218 07:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:58.218 "subsystems": [ 00:19:58.218 { 00:19:58.218 "subsystem": "keyring", 00:19:58.218 "config": [] 00:19:58.218 }, 00:19:58.218 { 00:19:58.218 "subsystem": "iobuf", 00:19:58.218 "config": [ 00:19:58.218 { 00:19:58.218 "method": "iobuf_set_options", 00:19:58.218 "params": { 00:19:58.218 "large_bufsize": 135168, 00:19:58.218 "large_pool_count": 1024, 00:19:58.218 "small_bufsize": 8192, 00:19:58.218 "small_pool_count": 8192 00:19:58.219 } 00:19:58.219 } 00:19:58.219 ] 00:19:58.219 }, 
00:19:58.219 { 00:19:58.219 "subsystem": "sock", 00:19:58.219 "config": [ 00:19:58.219 { 00:19:58.219 "method": "sock_set_default_impl", 00:19:58.219 "params": { 00:19:58.219 "impl_name": "posix" 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "sock_impl_set_options", 00:19:58.219 "params": { 00:19:58.219 "enable_ktls": false, 00:19:58.219 "enable_placement_id": 0, 00:19:58.219 "enable_quickack": false, 00:19:58.219 "enable_recv_pipe": true, 00:19:58.219 "enable_zerocopy_send_client": false, 00:19:58.219 "enable_zerocopy_send_server": true, 00:19:58.219 "impl_name": "ssl", 00:19:58.219 "recv_buf_size": 4096, 00:19:58.219 "send_buf_size": 4096, 00:19:58.219 "tls_version": 0, 00:19:58.219 "zerocopy_threshold": 0 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "sock_impl_set_options", 00:19:58.219 "params": { 00:19:58.219 "enable_ktls": false, 00:19:58.219 "enable_placement_id": 0, 00:19:58.219 "enable_quickack": false, 00:19:58.219 "enable_recv_pipe": true, 00:19:58.219 "enable_zerocopy_send_client": false, 00:19:58.219 "enable_zerocopy_send_server": true, 00:19:58.219 "impl_name": "posix", 00:19:58.219 "recv_buf_size": 2097152, 00:19:58.219 "send_buf_size": 2097152, 00:19:58.219 "tls_version": 0, 00:19:58.219 "zerocopy_threshold": 0 00:19:58.219 } 00:19:58.219 } 00:19:58.219 ] 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "subsystem": "vmd", 00:19:58.219 "config": [] 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "subsystem": "accel", 00:19:58.219 "config": [ 00:19:58.219 { 00:19:58.219 "method": "accel_set_options", 00:19:58.219 "params": { 00:19:58.219 "buf_count": 2048, 00:19:58.219 "large_cache_size": 16, 00:19:58.219 "sequence_count": 2048, 00:19:58.219 "small_cache_size": 128, 00:19:58.219 "task_count": 2048 00:19:58.219 } 00:19:58.219 } 00:19:58.219 ] 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "subsystem": "bdev", 00:19:58.219 "config": [ 00:19:58.219 { 00:19:58.219 "method": "bdev_set_options", 00:19:58.219 "params": { 00:19:58.219 "bdev_auto_examine": true, 00:19:58.219 "bdev_io_cache_size": 256, 00:19:58.219 "bdev_io_pool_size": 65535, 00:19:58.219 "iobuf_large_cache_size": 16, 00:19:58.219 "iobuf_small_cache_size": 128 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "bdev_raid_set_options", 00:19:58.219 "params": { 00:19:58.219 "process_window_size_kb": 1024 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "bdev_iscsi_set_options", 00:19:58.219 "params": { 00:19:58.219 "timeout_sec": 30 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "bdev_nvme_set_options", 00:19:58.219 "params": { 00:19:58.219 "action_on_timeout": "none", 00:19:58.219 "allow_accel_sequence": false, 00:19:58.219 "arbitration_burst": 0, 00:19:58.219 "bdev_retry_count": 3, 00:19:58.219 "ctrlr_loss_timeout_sec": 0, 00:19:58.219 "delay_cmd_submit": true, 00:19:58.219 "dhchap_dhgroups": [ 00:19:58.219 "null", 00:19:58.219 "ffdhe2048", 00:19:58.219 "ffdhe3072", 00:19:58.219 "ffdhe4096", 00:19:58.219 "ffdhe6144", 00:19:58.219 "ffdhe8192" 00:19:58.219 ], 00:19:58.219 "dhchap_digests": [ 00:19:58.219 "sha256", 00:19:58.219 "sha384", 00:19:58.219 "sha512" 00:19:58.219 ], 00:19:58.219 "disable_auto_failback": false, 00:19:58.219 "fast_io_fail_timeout_sec": 0, 00:19:58.219 "generate_uuids": false, 00:19:58.219 "high_priority_weight": 0, 00:19:58.219 "io_path_stat": false, 00:19:58.219 "io_queue_requests": 0, 00:19:58.219 "keep_alive_timeout_ms": 10000, 00:19:58.219 "low_priority_weight": 0, 00:19:58.219 
"medium_priority_weight": 0, 00:19:58.219 "nvme_adminq_poll_period_us": 10000, 00:19:58.219 "nvme_error_stat": false, 00:19:58.219 "nvme_ioq_poll_period_us": 0, 00:19:58.219 "rdma_cm_event_timeout_ms": 0, 00:19:58.219 "rdma_max_cq_size": 0, 00:19:58.219 "rdma_srq_size": 0, 00:19:58.219 "reconnect_delay_sec": 0, 00:19:58.219 "timeout_admin_us": 0, 00:19:58.219 "timeout_us": 0, 00:19:58.219 "transport_ack_timeout": 0, 00:19:58.219 "transport_retry_count": 4, 00:19:58.219 "transport_tos": 0 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "bdev_nvme_set_hotplug", 00:19:58.219 "params": { 00:19:58.219 "enable": false, 00:19:58.219 "period_us": 100000 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "bdev_malloc_create", 00:19:58.219 "params": { 00:19:58.219 "block_size": 4096, 00:19:58.219 "name": "malloc0", 00:19:58.219 "num_blocks": 8192, 00:19:58.219 "optimal_io_boundary": 0, 00:19:58.219 "physical_block_size": 4096, 00:19:58.219 "uuid": "19d939df-5bc4-45bf-a250-45137a87486f" 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "bdev_wait_for_examine" 00:19:58.219 } 00:19:58.219 ] 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "subsystem": "nbd", 00:19:58.219 "config": [] 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "subsystem": "scheduler", 00:19:58.219 "config": [ 00:19:58.219 { 00:19:58.219 "method": "framework_set_scheduler", 00:19:58.219 "params": { 00:19:58.219 "name": "static" 00:19:58.219 } 00:19:58.219 } 00:19:58.219 ] 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "subsystem": "nvmf", 00:19:58.219 "config": [ 00:19:58.219 { 00:19:58.219 "method": "nvmf_set_config", 00:19:58.219 "params": { 00:19:58.219 "admin_cmd_passthru": { 00:19:58.219 "identify_ctrlr": false 00:19:58.219 }, 00:19:58.219 "discovery_filter": "match_any" 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "nvmf_set_max_subsystems", 00:19:58.219 "params": { 00:19:58.219 "max_subsystems": 1024 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "nvmf_set_crdt", 00:19:58.219 "params": { 00:19:58.219 "crdt1": 0, 00:19:58.219 "crdt2": 0, 00:19:58.219 "crdt3": 0 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "nvmf_create_transport", 00:19:58.219 "params": { 00:19:58.219 "abort_timeout_sec": 1, 00:19:58.219 "ack_timeout": 0, 00:19:58.219 "buf_cache_size": 4294967295, 00:19:58.219 "c2h_success": false, 00:19:58.219 "data_wr_pool_size": 0, 00:19:58.219 "dif_insert_or_strip": false, 00:19:58.219 "in_capsule_data_size": 4096, 00:19:58.219 "io_unit_size": 131072, 00:19:58.219 "max_aq_depth": 128, 00:19:58.219 "max_io_qpairs_per_ctrlr": 127, 00:19:58.219 "max_io_size": 131072, 00:19:58.219 "max_queue_depth": 128, 00:19:58.219 "num_shared_buffers": 511, 00:19:58.219 "sock_priority": 0, 00:19:58.219 "trtype": "TCP", 00:19:58.219 "zcopy": false 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "nvmf_create_subsystem", 00:19:58.219 "params": { 00:19:58.219 "allow_any_host": false, 00:19:58.219 "ana_reporting": false, 00:19:58.219 "max_cntlid": 65519, 00:19:58.219 "max_namespaces": 10, 00:19:58.219 "min_cntlid": 1, 00:19:58.219 "model_number": "SPDK bdev Controller", 00:19:58.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.219 "serial_number": "SPDK00000000000001" 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "nvmf_subsystem_add_host", 00:19:58.219 "params": { 00:19:58.219 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.219 "psk": 
"/tmp/tmp.HHNJGClTEI" 00:19:58.219 } 00:19:58.219 }, 00:19:58.219 { 00:19:58.219 "method": "nvmf_subsystem_add_ns", 00:19:58.219 "params": { 00:19:58.219 "namespace": { 00:19:58.219 "bdev_name": "malloc0", 00:19:58.219 "nguid": "19D939DF5BC445BFA25045137A87486F", 00:19:58.219 "no_auto_visible": false, 00:19:58.219 "nsid": 1, 00:19:58.219 "uuid": "19d939df-5bc4-45bf-a250-45137a87486f" 00:19:58.219 }, 00:19:58.220 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:58.220 } 00:19:58.220 }, 00:19:58.220 { 00:19:58.220 "method": "nvmf_subsystem_add_listener", 00:19:58.220 "params": { 00:19:58.220 "listen_address": { 00:19:58.220 "adrfam": "IPv4", 00:19:58.220 "traddr": "10.0.0.2", 00:19:58.220 "trsvcid": "4420", 00:19:58.220 "trtype": "TCP" 00:19:58.220 }, 00:19:58.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.220 "secure_channel": true 00:19:58.220 } 00:19:58.220 } 00:19:58.220 ] 00:19:58.220 } 00:19:58.220 ] 00:19:58.220 }' 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100608 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100608 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100608 ']' 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.220 07:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.478 [2024-07-13 07:06:06.315382] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:58.478 [2024-07-13 07:06:06.315489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.478 [2024-07-13 07:06:06.455510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.478 [2024-07-13 07:06:06.538111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.478 [2024-07-13 07:06:06.538203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.478 [2024-07-13 07:06:06.538231] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.478 [2024-07-13 07:06:06.538239] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.478 [2024-07-13 07:06:06.538246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:58.478 [2024-07-13 07:06:06.538339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.737 [2024-07-13 07:06:06.760791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.737 [2024-07-13 07:06:06.776740] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:58.737 [2024-07-13 07:06:06.792721] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.737 [2024-07-13 07:06:06.792935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=100648 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 100648 /var/tmp/bdevperf.sock 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100648 ']' 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
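Note: the JSON blobs captured by save_config earlier are what target/tls.sh@203 and @204 feed back into the fresh target and bdevperf processes here via -c /dev/fd/62 and -c /dev/fd/63. A minimal sketch of that save-and-replay pattern, assuming bash process substitution in place of the script's nvmfappstart plumbing and dropping the ip netns wrapper the CI run uses; binary and socket paths are the ones from this run:

  # capture the live configuration of both processes as JSON
  tgtconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
  bdevperfconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # restart each process, replaying the captured JSON through a /dev/fd path
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")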
00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:59.303 07:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:59.303 "subsystems": [ 00:19:59.303 { 00:19:59.303 "subsystem": "keyring", 00:19:59.304 "config": [] 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "subsystem": "iobuf", 00:19:59.304 "config": [ 00:19:59.304 { 00:19:59.304 "method": "iobuf_set_options", 00:19:59.304 "params": { 00:19:59.304 "large_bufsize": 135168, 00:19:59.304 "large_pool_count": 1024, 00:19:59.304 "small_bufsize": 8192, 00:19:59.304 "small_pool_count": 8192 00:19:59.304 } 00:19:59.304 } 00:19:59.304 ] 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "subsystem": "sock", 00:19:59.304 "config": [ 00:19:59.304 { 00:19:59.304 "method": "sock_set_default_impl", 00:19:59.304 "params": { 00:19:59.304 "impl_name": "posix" 00:19:59.304 } 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "method": "sock_impl_set_options", 00:19:59.304 "params": { 00:19:59.304 "enable_ktls": false, 00:19:59.304 "enable_placement_id": 0, 00:19:59.304 "enable_quickack": false, 00:19:59.304 "enable_recv_pipe": true, 00:19:59.304 "enable_zerocopy_send_client": false, 00:19:59.304 "enable_zerocopy_send_server": true, 00:19:59.304 "impl_name": "ssl", 00:19:59.304 "recv_buf_size": 4096, 00:19:59.304 "send_buf_size": 4096, 00:19:59.304 "tls_version": 0, 00:19:59.304 "zerocopy_threshold": 0 00:19:59.304 } 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "method": "sock_impl_set_options", 00:19:59.304 "params": { 00:19:59.304 "enable_ktls": false, 00:19:59.304 "enable_placement_id": 0, 00:19:59.304 "enable_quickack": false, 00:19:59.304 "enable_recv_pipe": true, 00:19:59.304 "enable_zerocopy_send_client": false, 00:19:59.304 "enable_zerocopy_send_server": true, 00:19:59.304 "impl_name": "posix", 00:19:59.304 "recv_buf_size": 2097152, 00:19:59.304 "send_buf_size": 2097152, 00:19:59.304 "tls_version": 0, 00:19:59.304 "zerocopy_threshold": 0 00:19:59.304 } 00:19:59.304 } 00:19:59.304 ] 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "subsystem": "vmd", 00:19:59.304 "config": [] 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "subsystem": "accel", 00:19:59.304 "config": [ 00:19:59.304 { 00:19:59.304 "method": "accel_set_options", 00:19:59.304 "params": { 00:19:59.304 "buf_count": 2048, 00:19:59.304 "large_cache_size": 16, 00:19:59.304 "sequence_count": 2048, 00:19:59.304 "small_cache_size": 128, 00:19:59.304 "task_count": 2048 00:19:59.304 } 00:19:59.304 } 00:19:59.304 ] 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "subsystem": "bdev", 00:19:59.304 "config": [ 00:19:59.304 { 00:19:59.304 "method": "bdev_set_options", 00:19:59.304 "params": { 00:19:59.304 "bdev_auto_examine": true, 00:19:59.304 "bdev_io_cache_size": 256, 00:19:59.304 "bdev_io_pool_size": 65535, 00:19:59.304 "iobuf_large_cache_size": 16, 00:19:59.304 "iobuf_small_cache_size": 128 00:19:59.304 } 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "method": "bdev_raid_set_options", 00:19:59.304 "params": { 00:19:59.304 "process_window_size_kb": 1024 00:19:59.304 } 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "method": "bdev_iscsi_set_options", 00:19:59.304 "params": { 00:19:59.304 "timeout_sec": 30 00:19:59.304 } 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "method": 
"bdev_nvme_set_options", 00:19:59.304 "params": { 00:19:59.304 "action_on_timeout": "none", 00:19:59.304 "allow_accel_sequence": false, 00:19:59.304 "arbitration_burst": 0, 00:19:59.304 "bdev_retry_count": 3, 00:19:59.304 "ctrlr_loss_timeout_sec": 0, 00:19:59.304 "delay_cmd_submit": true, 00:19:59.304 "dhchap_dhgroups": [ 00:19:59.304 "null", 00:19:59.304 "ffdhe2048", 00:19:59.304 "ffdhe3072", 00:19:59.304 "ffdhe4096", 00:19:59.304 "ffdhe6144", 00:19:59.304 "ffdhe8192" 00:19:59.304 ], 00:19:59.304 "dhchap_digests": [ 00:19:59.304 "sha256", 00:19:59.304 "sha384", 00:19:59.304 "sha512" 00:19:59.304 ], 00:19:59.304 "disable_auto_failback": false, 00:19:59.304 "fast_io_fail_timeout_sec": 0, 00:19:59.304 "generate_uuids": false, 00:19:59.304 "high_priority_weight": 0, 00:19:59.304 "io_path_stat": false, 00:19:59.304 "io_queue_requests": 512, 00:19:59.304 "keep_alive_timeout_ms": 10000, 00:19:59.304 "low_priority_weight": 0, 00:19:59.304 "medium_priority_weight": 0, 00:19:59.304 "nvme_adminq_poll_period_us": 10000, 00:19:59.304 "nvme_error_stat": false, 00:19:59.304 "nvme_ioq_poll_period_us": 0, 00:19:59.304 "rdma_cm_event_timeout_ms": 0, 00:19:59.304 "rdma_max_cq_size": 0, 00:19:59.304 "rdma_srq_size": 0, 00:19:59.304 "reconnect_delay_sec": 0, 00:19:59.304 "timeout_admin_us": 0, 00:19:59.304 "timeout_us": 0, 00:19:59.304 "transport_ack_timeout": 0, 00:19:59.304 "transport_retry_count": 4, 00:19:59.304 "transport_tos": 0 00:19:59.304 } 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "method": "bdev_nvme_attach_controller", 00:19:59.304 "params": { 00:19:59.304 "adrfam": "IPv4", 00:19:59.304 "ctrlr_loss_timeout_sec": 0, 00:19:59.304 "ddgst": false, 00:19:59.304 "fast_io_fail_timeout_sec": 0, 00:19:59.304 "hdgst": false, 00:19:59.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.304 "name": "TLSTEST", 00:19:59.304 "prchk_guard": false, 00:19:59.304 "prchk_reftag": false, 00:19:59.304 "psk": "/tmp/tmp.HHNJGClTEI", 00:19:59.304 "reconnect_delay_sec": 0, 00:19:59.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.304 "traddr": "10.0.0.2", 00:19:59.304 "trsvcid": "4420", 00:19:59.304 "trtype": "TCP" 00:19:59.304 } 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "method": "bdev_nvme_set_hotplug", 00:19:59.304 "params": { 00:19:59.304 "enable": false, 00:19:59.304 "period_us": 100000 00:19:59.304 } 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "method": "bdev_wait_for_examine" 00:19:59.304 } 00:19:59.304 ] 00:19:59.304 }, 00:19:59.304 { 00:19:59.304 "subsystem": "nbd", 00:19:59.304 "config": [] 00:19:59.304 } 00:19:59.304 ] 00:19:59.304 }' 00:19:59.304 [2024-07-13 07:06:07.304134] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:19:59.304 [2024-07-13 07:06:07.304770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100648 ] 00:19:59.563 [2024-07-13 07:06:07.437016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.563 [2024-07-13 07:06:07.548833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.821 [2024-07-13 07:06:07.737690] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.821 [2024-07-13 07:06:07.737833] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:00.385 07:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.385 07:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:00.385 07:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:00.385 Running I/O for 10 seconds... 00:20:10.355 00:20:10.355 Latency(us) 00:20:10.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.355 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:10.355 Verification LBA range: start 0x0 length 0x2000 00:20:10.355 TLSTESTn1 : 10.02 4256.49 16.63 0.00 0.00 30010.67 2338.44 20971.52 00:20:10.355 =================================================================================================================== 00:20:10.355 Total : 4256.49 16.63 0.00 0.00 30010.67 2338.44 20971.52 00:20:10.355 0 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 100648 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100648 ']' 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100648 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100648 00:20:10.355 killing process with pid 100648 00:20:10.355 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.355 00:20:10.355 Latency(us) 00:20:10.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.355 =================================================================================================================== 00:20:10.355 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100648' 00:20:10.355 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100648 00:20:10.355 [2024-07-13 07:06:18.418301] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:10.355 07:06:18 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100648 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 100608 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100608 ']' 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100608 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100608 00:20:10.923 killing process with pid 100608 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100608' 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100608 00:20:10.923 [2024-07-13 07:06:18.724044] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100608 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100800 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100800 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100800 ']' 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.923 07:06:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.923 [2024-07-13 07:06:18.995760] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:20:10.924 [2024-07-13 07:06:18.995867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.182 [2024-07-13 07:06:19.139052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.182 [2024-07-13 07:06:19.223214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:11.182 [2024-07-13 07:06:19.223516] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.182 [2024-07-13 07:06:19.223700] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.182 [2024-07-13 07:06:19.223805] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.182 [2024-07-13 07:06:19.223884] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.182 [2024-07-13 07:06:19.224002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.119 07:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.119 07:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:12.119 07:06:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.119 07:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.119 07:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.119 07:06:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.119 07:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.HHNJGClTEI 00:20:12.119 07:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HHNJGClTEI 00:20:12.119 07:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.378 [2024-07-13 07:06:20.194500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.378 07:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.636 07:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.636 [2024-07-13 07:06:20.674622] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.636 [2024-07-13 07:06:20.674851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.636 07:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.894 malloc0 00:20:12.894 07:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:13.152 07:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI 00:20:13.411 [2024-07-13 07:06:21.370395] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:13.411 07:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:13.411 07:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=100897 00:20:13.411 07:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:13.411 07:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 100897 /var/tmp/bdevperf.sock 00:20:13.411 
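For reference, the setup_nvmf_tgt helper that just ran (target/tls.sh@219, expanding to the @51-@58 RPC calls logged above) amounts to the following sequence. This is a minimal sketch, assuming the same repo path, NQNs, listen address, and PSK interchange file (/tmp/tmp.HHNJGClTEI) used in this run:

  # create the TCP transport and a subsystem backed by a malloc bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k enables TLS on the listener (still flagged experimental in the notices above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # register the host with its pre-shared key (the --psk path form is the deprecated feature warned about above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HHNJGClTEI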
07:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100897 ']' 00:20:13.411 07:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.411 07:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.411 07:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.411 07:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.411 07:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.411 [2024-07-13 07:06:21.432108] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:20:13.411 [2024-07-13 07:06:21.432210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100897 ] 00:20:13.669 [2024-07-13 07:06:21.564618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.669 [2024-07-13 07:06:21.641463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.927 07:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.927 07:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:13.927 07:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HHNJGClTEI 00:20:14.186 07:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:14.186 [2024-07-13 07:06:22.243548] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.444 nvme0n1 00:20:14.444 07:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.444 Running I/O for 1 seconds... 
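The initiator side above takes the keyring route instead of handing bdev_nvme_attach_controller a raw PSK path: target/tls.sh@227 registers the interchange file as a named key, and @228 references that key when attaching. A minimal sketch against the bdevperf RPC socket and key file from this run:

  # register the PSK interchange file under the name key0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HHNJGClTEI
  # attach the TLS-enabled controller, referring to the key by name
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1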
00:20:15.816 00:20:15.816 Latency(us) 00:20:15.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.816 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.816 Verification LBA range: start 0x0 length 0x2000 00:20:15.816 nvme0n1 : 1.02 4037.14 15.77 0.00 0.00 31316.26 1832.03 19303.33 00:20:15.816 =================================================================================================================== 00:20:15.816 Total : 4037.14 15.77 0.00 0.00 31316.26 1832.03 19303.33 00:20:15.816 0 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 100897 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100897 ']' 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100897 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100897 00:20:15.816 killing process with pid 100897 00:20:15.816 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.816 00:20:15.816 Latency(us) 00:20:15.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.816 =================================================================================================================== 00:20:15.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100897' 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100897 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100897 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 100800 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100800 ']' 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100800 00:20:15.816 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:15.817 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.817 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100800 00:20:15.817 killing process with pid 100800 00:20:15.817 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:15.817 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:15.817 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100800' 00:20:15.817 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100800 00:20:15.817 [2024-07-13 07:06:23.740690] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:15.817 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100800 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.075 07:06:23 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100960 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100960 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 100960 ']' 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.075 07:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.075 [2024-07-13 07:06:24.023072] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:20:16.075 [2024-07-13 07:06:24.023185] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.334 [2024-07-13 07:06:24.163455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.334 [2024-07-13 07:06:24.246779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.334 [2024-07-13 07:06:24.246831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.334 [2024-07-13 07:06:24.246843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.334 [2024-07-13 07:06:24.246851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.334 [2024-07-13 07:06:24.246858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.334 [2024-07-13 07:06:24.246882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.271 [2024-07-13 07:06:25.104186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.271 malloc0 00:20:17.271 [2024-07-13 07:06:25.135664] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.271 [2024-07-13 07:06:25.135929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=101010 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 101010 /var/tmp/bdevperf.sock 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101010 ']' 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.271 07:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.271 [2024-07-13 07:06:25.222639] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:20:17.271 [2024-07-13 07:06:25.222755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101010 ] 00:20:17.530 [2024-07-13 07:06:25.367346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.530 [2024-07-13 07:06:25.452143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.465 07:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.465 07:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:18.465 07:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HHNJGClTEI 00:20:18.465 07:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:18.724 [2024-07-13 07:06:26.661103] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.724 nvme0n1 00:20:18.724 07:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.982 Running I/O for 1 seconds... 00:20:19.915 00:20:19.915 Latency(us) 00:20:19.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.915 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.915 Verification LBA range: start 0x0 length 0x2000 00:20:19.915 nvme0n1 : 1.03 4102.98 16.03 0.00 0.00 30839.80 7119.59 21567.30 00:20:19.915 =================================================================================================================== 00:20:19.915 Total : 4102.98 16.03 0.00 0.00 30839.80 7119.59 21567.30 00:20:19.915 0 00:20:19.915 07:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:19.915 07:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.915 07:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.172 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.172 07:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:20.172 "subsystems": [ 00:20:20.172 { 00:20:20.172 "subsystem": "keyring", 00:20:20.172 "config": [ 00:20:20.172 { 00:20:20.172 "method": "keyring_file_add_key", 00:20:20.172 "params": { 00:20:20.172 "name": "key0", 00:20:20.172 "path": "/tmp/tmp.HHNJGClTEI" 00:20:20.172 } 00:20:20.172 } 00:20:20.172 ] 00:20:20.172 }, 00:20:20.172 { 00:20:20.172 "subsystem": "iobuf", 00:20:20.172 "config": [ 00:20:20.172 { 00:20:20.172 "method": "iobuf_set_options", 00:20:20.172 "params": { 00:20:20.172 "large_bufsize": 135168, 00:20:20.172 "large_pool_count": 1024, 00:20:20.172 "small_bufsize": 8192, 00:20:20.172 "small_pool_count": 8192 00:20:20.172 } 00:20:20.172 } 00:20:20.172 ] 00:20:20.172 }, 00:20:20.172 { 00:20:20.172 "subsystem": "sock", 00:20:20.172 "config": [ 00:20:20.172 { 00:20:20.172 "method": "sock_set_default_impl", 00:20:20.172 "params": { 00:20:20.172 "impl_name": "posix" 00:20:20.172 } 00:20:20.172 }, 00:20:20.172 { 00:20:20.172 "method": "sock_impl_set_options", 00:20:20.172 "params": { 00:20:20.172 
"enable_ktls": false, 00:20:20.172 "enable_placement_id": 0, 00:20:20.172 "enable_quickack": false, 00:20:20.172 "enable_recv_pipe": true, 00:20:20.172 "enable_zerocopy_send_client": false, 00:20:20.172 "enable_zerocopy_send_server": true, 00:20:20.172 "impl_name": "ssl", 00:20:20.172 "recv_buf_size": 4096, 00:20:20.172 "send_buf_size": 4096, 00:20:20.172 "tls_version": 0, 00:20:20.172 "zerocopy_threshold": 0 00:20:20.172 } 00:20:20.172 }, 00:20:20.172 { 00:20:20.172 "method": "sock_impl_set_options", 00:20:20.172 "params": { 00:20:20.172 "enable_ktls": false, 00:20:20.172 "enable_placement_id": 0, 00:20:20.172 "enable_quickack": false, 00:20:20.172 "enable_recv_pipe": true, 00:20:20.172 "enable_zerocopy_send_client": false, 00:20:20.172 "enable_zerocopy_send_server": true, 00:20:20.172 "impl_name": "posix", 00:20:20.172 "recv_buf_size": 2097152, 00:20:20.172 "send_buf_size": 2097152, 00:20:20.172 "tls_version": 0, 00:20:20.172 "zerocopy_threshold": 0 00:20:20.172 } 00:20:20.172 } 00:20:20.172 ] 00:20:20.172 }, 00:20:20.172 { 00:20:20.172 "subsystem": "vmd", 00:20:20.172 "config": [] 00:20:20.172 }, 00:20:20.172 { 00:20:20.172 "subsystem": "accel", 00:20:20.172 "config": [ 00:20:20.172 { 00:20:20.172 "method": "accel_set_options", 00:20:20.172 "params": { 00:20:20.172 "buf_count": 2048, 00:20:20.172 "large_cache_size": 16, 00:20:20.172 "sequence_count": 2048, 00:20:20.172 "small_cache_size": 128, 00:20:20.172 "task_count": 2048 00:20:20.172 } 00:20:20.173 } 00:20:20.173 ] 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "subsystem": "bdev", 00:20:20.173 "config": [ 00:20:20.173 { 00:20:20.173 "method": "bdev_set_options", 00:20:20.173 "params": { 00:20:20.173 "bdev_auto_examine": true, 00:20:20.173 "bdev_io_cache_size": 256, 00:20:20.173 "bdev_io_pool_size": 65535, 00:20:20.173 "iobuf_large_cache_size": 16, 00:20:20.173 "iobuf_small_cache_size": 128 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "bdev_raid_set_options", 00:20:20.173 "params": { 00:20:20.173 "process_window_size_kb": 1024 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "bdev_iscsi_set_options", 00:20:20.173 "params": { 00:20:20.173 "timeout_sec": 30 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "bdev_nvme_set_options", 00:20:20.173 "params": { 00:20:20.173 "action_on_timeout": "none", 00:20:20.173 "allow_accel_sequence": false, 00:20:20.173 "arbitration_burst": 0, 00:20:20.173 "bdev_retry_count": 3, 00:20:20.173 "ctrlr_loss_timeout_sec": 0, 00:20:20.173 "delay_cmd_submit": true, 00:20:20.173 "dhchap_dhgroups": [ 00:20:20.173 "null", 00:20:20.173 "ffdhe2048", 00:20:20.173 "ffdhe3072", 00:20:20.173 "ffdhe4096", 00:20:20.173 "ffdhe6144", 00:20:20.173 "ffdhe8192" 00:20:20.173 ], 00:20:20.173 "dhchap_digests": [ 00:20:20.173 "sha256", 00:20:20.173 "sha384", 00:20:20.173 "sha512" 00:20:20.173 ], 00:20:20.173 "disable_auto_failback": false, 00:20:20.173 "fast_io_fail_timeout_sec": 0, 00:20:20.173 "generate_uuids": false, 00:20:20.173 "high_priority_weight": 0, 00:20:20.173 "io_path_stat": false, 00:20:20.173 "io_queue_requests": 0, 00:20:20.173 "keep_alive_timeout_ms": 10000, 00:20:20.173 "low_priority_weight": 0, 00:20:20.173 "medium_priority_weight": 0, 00:20:20.173 "nvme_adminq_poll_period_us": 10000, 00:20:20.173 "nvme_error_stat": false, 00:20:20.173 "nvme_ioq_poll_period_us": 0, 00:20:20.173 "rdma_cm_event_timeout_ms": 0, 00:20:20.173 "rdma_max_cq_size": 0, 00:20:20.173 "rdma_srq_size": 0, 00:20:20.173 "reconnect_delay_sec": 0, 00:20:20.173 "timeout_admin_us": 0, 
00:20:20.173 "timeout_us": 0, 00:20:20.173 "transport_ack_timeout": 0, 00:20:20.173 "transport_retry_count": 4, 00:20:20.173 "transport_tos": 0 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "bdev_nvme_set_hotplug", 00:20:20.173 "params": { 00:20:20.173 "enable": false, 00:20:20.173 "period_us": 100000 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "bdev_malloc_create", 00:20:20.173 "params": { 00:20:20.173 "block_size": 4096, 00:20:20.173 "name": "malloc0", 00:20:20.173 "num_blocks": 8192, 00:20:20.173 "optimal_io_boundary": 0, 00:20:20.173 "physical_block_size": 4096, 00:20:20.173 "uuid": "9f61fe30-a940-4143-8176-b8ba2df1698a" 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "bdev_wait_for_examine" 00:20:20.173 } 00:20:20.173 ] 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "subsystem": "nbd", 00:20:20.173 "config": [] 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "subsystem": "scheduler", 00:20:20.173 "config": [ 00:20:20.173 { 00:20:20.173 "method": "framework_set_scheduler", 00:20:20.173 "params": { 00:20:20.173 "name": "static" 00:20:20.173 } 00:20:20.173 } 00:20:20.173 ] 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "subsystem": "nvmf", 00:20:20.173 "config": [ 00:20:20.173 { 00:20:20.173 "method": "nvmf_set_config", 00:20:20.173 "params": { 00:20:20.173 "admin_cmd_passthru": { 00:20:20.173 "identify_ctrlr": false 00:20:20.173 }, 00:20:20.173 "discovery_filter": "match_any" 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "nvmf_set_max_subsystems", 00:20:20.173 "params": { 00:20:20.173 "max_subsystems": 1024 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "nvmf_set_crdt", 00:20:20.173 "params": { 00:20:20.173 "crdt1": 0, 00:20:20.173 "crdt2": 0, 00:20:20.173 "crdt3": 0 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "nvmf_create_transport", 00:20:20.173 "params": { 00:20:20.173 "abort_timeout_sec": 1, 00:20:20.173 "ack_timeout": 0, 00:20:20.173 "buf_cache_size": 4294967295, 00:20:20.173 "c2h_success": false, 00:20:20.173 "data_wr_pool_size": 0, 00:20:20.173 "dif_insert_or_strip": false, 00:20:20.173 "in_capsule_data_size": 4096, 00:20:20.173 "io_unit_size": 131072, 00:20:20.173 "max_aq_depth": 128, 00:20:20.173 "max_io_qpairs_per_ctrlr": 127, 00:20:20.173 "max_io_size": 131072, 00:20:20.173 "max_queue_depth": 128, 00:20:20.173 "num_shared_buffers": 511, 00:20:20.173 "sock_priority": 0, 00:20:20.173 "trtype": "TCP", 00:20:20.173 "zcopy": false 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "nvmf_create_subsystem", 00:20:20.173 "params": { 00:20:20.173 "allow_any_host": false, 00:20:20.173 "ana_reporting": false, 00:20:20.173 "max_cntlid": 65519, 00:20:20.173 "max_namespaces": 32, 00:20:20.173 "min_cntlid": 1, 00:20:20.173 "model_number": "SPDK bdev Controller", 00:20:20.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.173 "serial_number": "00000000000000000000" 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "nvmf_subsystem_add_host", 00:20:20.173 "params": { 00:20:20.173 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.173 "psk": "key0" 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "nvmf_subsystem_add_ns", 00:20:20.173 "params": { 00:20:20.173 "namespace": { 00:20:20.173 "bdev_name": "malloc0", 00:20:20.173 "nguid": "9F61FE30A94041438176B8BA2DF1698A", 00:20:20.173 "no_auto_visible": false, 00:20:20.173 "nsid": 1, 00:20:20.173 "uuid": 
"9f61fe30-a940-4143-8176-b8ba2df1698a" 00:20:20.173 }, 00:20:20.173 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:20.173 } 00:20:20.173 }, 00:20:20.173 { 00:20:20.173 "method": "nvmf_subsystem_add_listener", 00:20:20.173 "params": { 00:20:20.173 "listen_address": { 00:20:20.173 "adrfam": "IPv4", 00:20:20.173 "traddr": "10.0.0.2", 00:20:20.173 "trsvcid": "4420", 00:20:20.173 "trtype": "TCP" 00:20:20.173 }, 00:20:20.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.173 "secure_channel": true 00:20:20.173 } 00:20:20.173 } 00:20:20.173 ] 00:20:20.173 } 00:20:20.173 ] 00:20:20.173 }' 00:20:20.173 07:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:20.431 07:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:20.431 "subsystems": [ 00:20:20.431 { 00:20:20.431 "subsystem": "keyring", 00:20:20.431 "config": [ 00:20:20.431 { 00:20:20.431 "method": "keyring_file_add_key", 00:20:20.431 "params": { 00:20:20.431 "name": "key0", 00:20:20.431 "path": "/tmp/tmp.HHNJGClTEI" 00:20:20.431 } 00:20:20.431 } 00:20:20.431 ] 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "subsystem": "iobuf", 00:20:20.431 "config": [ 00:20:20.431 { 00:20:20.431 "method": "iobuf_set_options", 00:20:20.431 "params": { 00:20:20.431 "large_bufsize": 135168, 00:20:20.431 "large_pool_count": 1024, 00:20:20.431 "small_bufsize": 8192, 00:20:20.431 "small_pool_count": 8192 00:20:20.431 } 00:20:20.431 } 00:20:20.431 ] 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "subsystem": "sock", 00:20:20.431 "config": [ 00:20:20.431 { 00:20:20.431 "method": "sock_set_default_impl", 00:20:20.431 "params": { 00:20:20.431 "impl_name": "posix" 00:20:20.431 } 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "method": "sock_impl_set_options", 00:20:20.431 "params": { 00:20:20.431 "enable_ktls": false, 00:20:20.431 "enable_placement_id": 0, 00:20:20.431 "enable_quickack": false, 00:20:20.431 "enable_recv_pipe": true, 00:20:20.431 "enable_zerocopy_send_client": false, 00:20:20.431 "enable_zerocopy_send_server": true, 00:20:20.431 "impl_name": "ssl", 00:20:20.431 "recv_buf_size": 4096, 00:20:20.431 "send_buf_size": 4096, 00:20:20.431 "tls_version": 0, 00:20:20.431 "zerocopy_threshold": 0 00:20:20.431 } 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "method": "sock_impl_set_options", 00:20:20.431 "params": { 00:20:20.431 "enable_ktls": false, 00:20:20.431 "enable_placement_id": 0, 00:20:20.431 "enable_quickack": false, 00:20:20.431 "enable_recv_pipe": true, 00:20:20.431 "enable_zerocopy_send_client": false, 00:20:20.431 "enable_zerocopy_send_server": true, 00:20:20.431 "impl_name": "posix", 00:20:20.431 "recv_buf_size": 2097152, 00:20:20.431 "send_buf_size": 2097152, 00:20:20.431 "tls_version": 0, 00:20:20.431 "zerocopy_threshold": 0 00:20:20.431 } 00:20:20.431 } 00:20:20.431 ] 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "subsystem": "vmd", 00:20:20.431 "config": [] 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "subsystem": "accel", 00:20:20.431 "config": [ 00:20:20.431 { 00:20:20.431 "method": "accel_set_options", 00:20:20.431 "params": { 00:20:20.431 "buf_count": 2048, 00:20:20.431 "large_cache_size": 16, 00:20:20.431 "sequence_count": 2048, 00:20:20.431 "small_cache_size": 128, 00:20:20.431 "task_count": 2048 00:20:20.431 } 00:20:20.431 } 00:20:20.431 ] 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "subsystem": "bdev", 00:20:20.431 "config": [ 00:20:20.431 { 00:20:20.431 "method": "bdev_set_options", 00:20:20.431 "params": { 00:20:20.431 "bdev_auto_examine": true, 
00:20:20.431 "bdev_io_cache_size": 256, 00:20:20.431 "bdev_io_pool_size": 65535, 00:20:20.431 "iobuf_large_cache_size": 16, 00:20:20.431 "iobuf_small_cache_size": 128 00:20:20.431 } 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "method": "bdev_raid_set_options", 00:20:20.431 "params": { 00:20:20.431 "process_window_size_kb": 1024 00:20:20.431 } 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "method": "bdev_iscsi_set_options", 00:20:20.431 "params": { 00:20:20.431 "timeout_sec": 30 00:20:20.431 } 00:20:20.431 }, 00:20:20.431 { 00:20:20.431 "method": "bdev_nvme_set_options", 00:20:20.431 "params": { 00:20:20.431 "action_on_timeout": "none", 00:20:20.431 "allow_accel_sequence": false, 00:20:20.431 "arbitration_burst": 0, 00:20:20.431 "bdev_retry_count": 3, 00:20:20.431 "ctrlr_loss_timeout_sec": 0, 00:20:20.431 "delay_cmd_submit": true, 00:20:20.432 "dhchap_dhgroups": [ 00:20:20.432 "null", 00:20:20.432 "ffdhe2048", 00:20:20.432 "ffdhe3072", 00:20:20.432 "ffdhe4096", 00:20:20.432 "ffdhe6144", 00:20:20.432 "ffdhe8192" 00:20:20.432 ], 00:20:20.432 "dhchap_digests": [ 00:20:20.432 "sha256", 00:20:20.432 "sha384", 00:20:20.432 "sha512" 00:20:20.432 ], 00:20:20.432 "disable_auto_failback": false, 00:20:20.432 "fast_io_fail_timeout_sec": 0, 00:20:20.432 "generate_uuids": false, 00:20:20.432 "high_priority_weight": 0, 00:20:20.432 "io_path_stat": false, 00:20:20.432 "io_queue_requests": 512, 00:20:20.432 "keep_alive_timeout_ms": 10000, 00:20:20.432 "low_priority_weight": 0, 00:20:20.432 "medium_priority_weight": 0, 00:20:20.432 "nvme_adminq_poll_period_us": 10000, 00:20:20.432 "nvme_error_stat": false, 00:20:20.432 "nvme_ioq_poll_period_us": 0, 00:20:20.432 "rdma_cm_event_timeout_ms": 0, 00:20:20.432 "rdma_max_cq_size": 0, 00:20:20.432 "rdma_srq_size": 0, 00:20:20.432 "reconnect_delay_sec": 0, 00:20:20.432 "timeout_admin_us": 0, 00:20:20.432 "timeout_us": 0, 00:20:20.432 "transport_ack_timeout": 0, 00:20:20.432 "transport_retry_count": 4, 00:20:20.432 "transport_tos": 0 00:20:20.432 } 00:20:20.432 }, 00:20:20.432 { 00:20:20.432 "method": "bdev_nvme_attach_controller", 00:20:20.432 "params": { 00:20:20.432 "adrfam": "IPv4", 00:20:20.432 "ctrlr_loss_timeout_sec": 0, 00:20:20.432 "ddgst": false, 00:20:20.432 "fast_io_fail_timeout_sec": 0, 00:20:20.432 "hdgst": false, 00:20:20.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.432 "name": "nvme0", 00:20:20.432 "prchk_guard": false, 00:20:20.432 "prchk_reftag": false, 00:20:20.432 "psk": "key0", 00:20:20.432 "reconnect_delay_sec": 0, 00:20:20.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.432 "traddr": "10.0.0.2", 00:20:20.432 "trsvcid": "4420", 00:20:20.432 "trtype": "TCP" 00:20:20.432 } 00:20:20.432 }, 00:20:20.432 { 00:20:20.432 "method": "bdev_nvme_set_hotplug", 00:20:20.432 "params": { 00:20:20.432 "enable": false, 00:20:20.432 "period_us": 100000 00:20:20.432 } 00:20:20.432 }, 00:20:20.432 { 00:20:20.432 "method": "bdev_enable_histogram", 00:20:20.432 "params": { 00:20:20.432 "enable": true, 00:20:20.432 "name": "nvme0n1" 00:20:20.432 } 00:20:20.432 }, 00:20:20.432 { 00:20:20.432 "method": "bdev_wait_for_examine" 00:20:20.432 } 00:20:20.432 ] 00:20:20.432 }, 00:20:20.432 { 00:20:20.432 "subsystem": "nbd", 00:20:20.432 "config": [] 00:20:20.432 } 00:20:20.432 ] 00:20:20.432 }' 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 101010 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101010 ']' 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101010 
00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101010 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101010' 00:20:20.432 killing process with pid 101010 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101010 00:20:20.432 Received shutdown signal, test time was about 1.000000 seconds 00:20:20.432 00:20:20.432 Latency(us) 00:20:20.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.432 =================================================================================================================== 00:20:20.432 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.432 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101010 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 100960 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 100960 ']' 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 100960 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100960 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:20.690 killing process with pid 100960 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100960' 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 100960 00:20:20.690 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 100960 00:20:20.958 07:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:20.958 07:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:20.958 "subsystems": [ 00:20:20.958 { 00:20:20.958 "subsystem": "keyring", 00:20:20.958 "config": [ 00:20:20.958 { 00:20:20.958 "method": "keyring_file_add_key", 00:20:20.958 "params": { 00:20:20.958 "name": "key0", 00:20:20.958 "path": "/tmp/tmp.HHNJGClTEI" 00:20:20.958 } 00:20:20.958 } 00:20:20.958 ] 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "subsystem": "iobuf", 00:20:20.958 "config": [ 00:20:20.958 { 00:20:20.958 "method": "iobuf_set_options", 00:20:20.958 "params": { 00:20:20.958 "large_bufsize": 135168, 00:20:20.958 "large_pool_count": 1024, 00:20:20.958 "small_bufsize": 8192, 00:20:20.958 "small_pool_count": 8192 00:20:20.958 } 00:20:20.958 } 00:20:20.958 ] 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "subsystem": "sock", 00:20:20.958 "config": [ 00:20:20.958 { 00:20:20.958 "method": "sock_set_default_impl", 00:20:20.958 "params": { 00:20:20.958 "impl_name": "posix" 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "sock_impl_set_options", 00:20:20.958 "params": { 00:20:20.958 "enable_ktls": false, 
00:20:20.958 "enable_placement_id": 0, 00:20:20.958 "enable_quickack": false, 00:20:20.958 "enable_recv_pipe": true, 00:20:20.958 "enable_zerocopy_send_client": false, 00:20:20.958 "enable_zerocopy_send_server": true, 00:20:20.958 "impl_name": "ssl", 00:20:20.958 "recv_buf_size": 4096, 00:20:20.958 "send_buf_size": 4096, 00:20:20.958 "tls_version": 0, 00:20:20.958 "zerocopy_threshold": 0 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "sock_impl_set_options", 00:20:20.958 "params": { 00:20:20.958 "enable_ktls": false, 00:20:20.958 "enable_placement_id": 0, 00:20:20.958 "enable_quickack": false, 00:20:20.958 "enable_recv_pipe": true, 00:20:20.958 "enable_zerocopy_send_client": false, 00:20:20.958 "enable_zerocopy_send_server": true, 00:20:20.958 "impl_name": "posix", 00:20:20.958 "recv_buf_size": 2097152, 00:20:20.958 "send_buf_size": 2097152, 00:20:20.958 "tls_version": 0, 00:20:20.958 "zerocopy_threshold": 0 00:20:20.958 } 00:20:20.958 } 00:20:20.958 ] 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "subsystem": "vmd", 00:20:20.958 "config": [] 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "subsystem": "accel", 00:20:20.958 "config": [ 00:20:20.958 { 00:20:20.958 "method": "accel_set_options", 00:20:20.958 "params": { 00:20:20.958 "buf_count": 2048, 00:20:20.958 "large_cache_size": 16, 00:20:20.958 "sequence_count": 2048, 00:20:20.958 "small_cache_size": 128, 00:20:20.958 "task_count": 2048 00:20:20.958 } 00:20:20.958 } 00:20:20.958 ] 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "subsystem": "bdev", 00:20:20.958 "config": [ 00:20:20.958 { 00:20:20.958 "method": "bdev_set_options", 00:20:20.958 "params": { 00:20:20.958 "bdev_auto_examine": true, 00:20:20.958 "bdev_io_cache_size": 256, 00:20:20.958 "bdev_io_pool_size": 65535, 00:20:20.958 "iobuf_large_cache_size": 16, 00:20:20.958 "iobuf_small_cache_size": 128 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "bdev_raid_set_options", 00:20:20.958 "params": { 00:20:20.958 "process_window_size_kb": 1024 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "bdev_iscsi_set_options", 00:20:20.958 "params": { 00:20:20.958 "timeout_sec": 30 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "bdev_nvme_set_options", 00:20:20.958 "params": { 00:20:20.958 "action_on_timeout": "none", 00:20:20.958 "allow_accel_sequence": false, 00:20:20.958 "arbitration_burst": 0, 00:20:20.958 "bdev_retry_count": 3, 00:20:20.958 "ctrlr_loss_timeout_sec": 0, 00:20:20.958 "delay_cmd_submit": true, 00:20:20.958 "dhchap_dhgroups": [ 00:20:20.958 "null", 00:20:20.958 "ffdhe2048", 00:20:20.958 "ffdhe3072", 00:20:20.958 "ffdhe4096", 00:20:20.958 "ffdhe6144", 00:20:20.958 "ffdhe8192" 00:20:20.958 ], 00:20:20.958 "dhchap_digests": [ 00:20:20.958 "sha256", 00:20:20.958 "sha384", 00:20:20.958 "sha512" 00:20:20.958 ], 00:20:20.958 "disable_auto_failback": false, 00:20:20.958 "fast_io_fail_timeout_sec": 0, 00:20:20.958 "generate_uuids": false, 00:20:20.958 "high_priority_weight": 0, 00:20:20.958 "io_path_stat": false, 00:20:20.958 "io_queue_requests": 0, 00:20:20.958 "keep_alive_timeout_ms": 10000, 00:20:20.958 "low_priority_weight": 0, 00:20:20.958 "medium_priority_weight": 0, 00:20:20.958 "nvme_adminq_poll_period_us": 10000, 00:20:20.958 "nvme_error_stat": false, 00:20:20.958 "nvme_ioq_poll_period_us": 0, 00:20:20.958 "rdma_cm_event_timeout_ms": 0, 00:20:20.958 "rdma_max_cq_size": 0, 00:20:20.958 "rdma_srq_size": 0, 00:20:20.958 "reconnect_delay_sec": 0, 00:20:20.958 "timeout_admin_us": 0, 00:20:20.958 
"timeout_us": 0, 00:20:20.958 "transport_ack_timeout": 0, 00:20:20.958 "transport_retry_count": 4, 00:20:20.958 "transport_tos": 0 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "bdev_nvme_set_hotplug", 00:20:20.958 "params": { 00:20:20.958 "enable": false, 00:20:20.958 "period_us": 100000 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "bdev_malloc_create", 00:20:20.958 "params": { 00:20:20.958 "block_size": 4096, 00:20:20.958 "name": "malloc0", 00:20:20.958 "num_blocks": 8192, 00:20:20.958 "optimal_io_boundary": 0, 00:20:20.958 "physical_block_size": 4096, 00:20:20.958 "uuid": "9f61fe30-a940-4143-8176-b8ba2df1698a" 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "bdev_wait_for_examine" 00:20:20.958 } 00:20:20.958 ] 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "subsystem": "nbd", 00:20:20.958 "config": [] 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "subsystem": "scheduler", 00:20:20.958 "config": [ 00:20:20.958 { 00:20:20.958 "method": "framework_set_scheduler", 00:20:20.958 "params": { 00:20:20.958 "name": "static" 00:20:20.958 } 00:20:20.958 } 00:20:20.958 ] 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "subsystem": "nvmf", 00:20:20.958 "config": [ 00:20:20.958 { 00:20:20.958 "method": "nvmf_set_config", 00:20:20.958 "params": { 00:20:20.958 "admin_cmd_passthru": { 00:20:20.958 "identify_ctrlr": false 00:20:20.958 }, 00:20:20.958 "discovery_filter": "match_any" 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "nvmf_set_max_subsystems", 00:20:20.958 "params": { 00:20:20.958 "max_subsystems": 1024 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "nvmf_set_crdt", 00:20:20.958 "params": { 00:20:20.958 "crdt1": 0, 00:20:20.958 "crdt2": 0, 00:20:20.958 "crdt3": 0 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 { 00:20:20.958 "method": "nvmf_create_transport", 00:20:20.958 "params": { 00:20:20.958 "abort_timeout_sec": 1, 00:20:20.958 "ack_timeout": 0, 00:20:20.958 "buf_cache_size": 4294967295, 00:20:20.958 "c2h_success": false, 00:20:20.958 "data_wr_pool_size": 0, 00:20:20.958 "dif_insert_or_strip": false, 00:20:20.958 "in_capsule_data_size": 4096, 00:20:20.958 "io_unit_size": 131072, 00:20:20.958 "max_aq_depth": 128, 00:20:20.959 "max_io_qpairs_per_ctrlr": 127, 00:20:20.959 "max_io_size": 131072, 00:20:20.959 "max_queue_depth": 128, 00:20:20.959 "num_shared_buffers": 511, 00:20:20.959 "sock_priority": 0, 00:20:20.959 "trtype": "TCP", 00:20:20.959 "zcopy": false 00:20:20.959 } 00:20:20.959 }, 00:20:20.959 { 00:20:20.959 "method": "nvmf_create_subsystem", 00:20:20.959 "params": { 00:20:20.959 "allow_any_host": false, 00:20:20.959 "ana_reporting": false, 00:20:20.959 "max_cntlid": 65519, 00:20:20.959 "max_namespaces": 32, 00:20:20.959 "min_cntlid": 1, 00:20:20.959 "model_number": "SPDK bdev Controller", 00:20:20.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.959 "serial_number": "00000000000000000000" 00:20:20.959 } 00:20:20.959 }, 00:20:20.959 { 00:20:20.959 "method": "nvmf_subsystem_add_host", 00:20:20.959 "params": { 00:20:20.959 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.959 "psk": "key0" 00:20:20.959 } 00:20:20.959 }, 00:20:20.959 { 00:20:20.959 "method": "nvmf_subsystem_add_ns", 00:20:20.959 "params": { 00:20:20.959 "namespace": { 00:20:20.959 "bdev_name": "malloc0", 00:20:20.959 "nguid": "9F61FE30A94041438176B8BA2DF1698A", 00:20:20.959 "no_auto_visible": false, 00:20:20.959 "nsid": 1, 00:20:20.959 "uuid": 
"9f61fe30-a940-4143-8176-b8ba2df1698a" 00:20:20.959 }, 00:20:20.959 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:20.959 } 00:20:20.959 }, 00:20:20.959 { 00:20:20.959 "method": "nvmf_subsystem_add_listener", 00:20:20.959 "params": { 00:20:20.959 "listen_address": { 00:20:20.959 "adrfam": "IPv4", 00:20:20.959 "traddr": "10.0.0.2", 00:20:20.959 "trsvcid": "4420", 00:20:20.959 "trtype": "TCP" 00:20:20.959 }, 00:20:20.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.959 "secure_channel": true 00:20:20.959 } 00:20:20.959 } 00:20:20.959 ] 00:20:20.959 } 00:20:20.959 ] 00:20:20.959 }' 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101101 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101101 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101101 ']' 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.959 07:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.959 [2024-07-13 07:06:28.992170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:20:20.959 [2024-07-13 07:06:28.992251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.253 [2024-07-13 07:06:29.127539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.254 [2024-07-13 07:06:29.206405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.254 [2024-07-13 07:06:29.206508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.254 [2024-07-13 07:06:29.206520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.254 [2024-07-13 07:06:29.206542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.254 [2024-07-13 07:06:29.206549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:21.254 [2024-07-13 07:06:29.206657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.527 [2024-07-13 07:06:29.439070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.527 [2024-07-13 07:06:29.471003] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.527 [2024-07-13 07:06:29.471196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.090 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=101145 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 101145 /var/tmp/bdevperf.sock 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 101145 ']' 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
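The same replay pattern is used for the initiator in the trace that follows: bdevperf is launched with -c /dev/fd/63 and the bperfcfg JSON (which already contains keyring_file_add_key, bdev_nvme_attach_controller with psk key0, and bdev_enable_histogram) is echoed into it, so no rpc.py calls are needed this time. A sketch, again assuming process substitution behind the /dev/fd/63 argument shown in the trace:

# $bperfcfg is the JSON captured from the first bdevperf via save_config (target/tls.sh@264).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
    -c <(echo "$bperfcfg")      # assumed wiring; the trace shows the resulting -c /dev/fd/63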
00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:22.091 07:06:29 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:22.091 "subsystems": [ 00:20:22.091 { 00:20:22.091 "subsystem": "keyring", 00:20:22.091 "config": [ 00:20:22.091 { 00:20:22.091 "method": "keyring_file_add_key", 00:20:22.091 "params": { 00:20:22.091 "name": "key0", 00:20:22.091 "path": "/tmp/tmp.HHNJGClTEI" 00:20:22.091 } 00:20:22.091 } 00:20:22.091 ] 00:20:22.091 }, 00:20:22.091 { 00:20:22.091 "subsystem": "iobuf", 00:20:22.091 "config": [ 00:20:22.091 { 00:20:22.091 "method": "iobuf_set_options", 00:20:22.091 "params": { 00:20:22.091 "large_bufsize": 135168, 00:20:22.091 "large_pool_count": 1024, 00:20:22.091 "small_bufsize": 8192, 00:20:22.091 "small_pool_count": 8192 00:20:22.091 } 00:20:22.091 } 00:20:22.091 ] 00:20:22.091 }, 00:20:22.091 { 00:20:22.091 "subsystem": "sock", 00:20:22.091 "config": [ 00:20:22.091 { 00:20:22.091 "method": "sock_set_default_impl", 00:20:22.091 "params": { 00:20:22.091 "impl_name": "posix" 00:20:22.091 } 00:20:22.091 }, 00:20:22.091 { 00:20:22.091 "method": "sock_impl_set_options", 00:20:22.091 "params": { 00:20:22.091 "enable_ktls": false, 00:20:22.091 "enable_placement_id": 0, 00:20:22.091 "enable_quickack": false, 00:20:22.091 "enable_recv_pipe": true, 00:20:22.091 "enable_zerocopy_send_client": false, 00:20:22.091 "enable_zerocopy_send_server": true, 00:20:22.091 "impl_name": "ssl", 00:20:22.091 "recv_buf_size": 4096, 00:20:22.091 "send_buf_size": 4096, 00:20:22.091 "tls_version": 0, 00:20:22.091 "zerocopy_threshold": 0 00:20:22.091 } 00:20:22.091 }, 00:20:22.091 { 00:20:22.091 "method": "sock_impl_set_options", 00:20:22.091 "params": { 00:20:22.091 "enable_ktls": false, 00:20:22.091 "enable_placement_id": 0, 00:20:22.091 "enable_quickack": false, 00:20:22.091 "enable_recv_pipe": true, 00:20:22.091 "enable_zerocopy_send_client": false, 00:20:22.091 "enable_zerocopy_send_server": true, 00:20:22.091 "impl_name": "posix", 00:20:22.091 "recv_buf_size": 2097152, 00:20:22.091 "send_buf_size": 2097152, 00:20:22.091 "tls_version": 0, 00:20:22.091 "zerocopy_threshold": 0 00:20:22.091 } 00:20:22.091 } 00:20:22.091 ] 00:20:22.091 }, 00:20:22.091 { 00:20:22.091 "subsystem": "vmd", 00:20:22.091 "config": [] 00:20:22.091 }, 00:20:22.091 { 00:20:22.091 "subsystem": "accel", 00:20:22.091 "config": [ 00:20:22.091 { 00:20:22.091 "method": "accel_set_options", 00:20:22.091 "params": { 00:20:22.091 "buf_count": 2048, 00:20:22.091 "large_cache_size": 16, 00:20:22.091 "sequence_count": 2048, 00:20:22.091 "small_cache_size": 128, 00:20:22.091 "task_count": 2048 00:20:22.091 } 00:20:22.091 } 00:20:22.091 ] 00:20:22.091 }, 00:20:22.091 { 00:20:22.091 "subsystem": "bdev", 00:20:22.091 "config": [ 00:20:22.091 { 00:20:22.091 "method": "bdev_set_options", 00:20:22.091 "params": { 00:20:22.091 "bdev_auto_examine": true, 00:20:22.091 "bdev_io_cache_size": 256, 00:20:22.091 "bdev_io_pool_size": 65535, 00:20:22.091 "iobuf_large_cache_size": 16, 00:20:22.091 "iobuf_small_cache_size": 128 00:20:22.091 } 00:20:22.091 }, 00:20:22.091 { 00:20:22.091 "method": "bdev_raid_set_options", 00:20:22.091 "params": { 00:20:22.091 "process_window_size_kb": 1024 00:20:22.091 } 00:20:22.091 }, 00:20:22.091 
{ 00:20:22.091 "method": "bdev_iscsi_set_options", 00:20:22.091 "params": { 00:20:22.091 "timeout_sec": 30 00:20:22.091 } 00:20:22.091 }, 00:20:22.091 { 00:20:22.091 "method": "bdev_nvme_set_options", 00:20:22.091 "params": { 00:20:22.091 "action_on_timeout": "none", 00:20:22.091 "allow_accel_sequence": false, 00:20:22.091 "arbitration_burst": 0, 00:20:22.091 "bdev_retry_count": 3, 00:20:22.091 "ctrlr_loss_timeout_sec": 0, 00:20:22.091 "delay_cmd_submit": true, 00:20:22.091 "dhchap_dhgroups": [ 00:20:22.091 "null", 00:20:22.091 "ffdhe2048", 00:20:22.091 "ffdhe3072", 00:20:22.091 "ffdhe4096", 00:20:22.091 "ffdhe6144", 00:20:22.091 "ffdhe8192" 00:20:22.091 ], 00:20:22.092 "dhchap_digests": [ 00:20:22.092 "sha256", 00:20:22.092 "sha384", 00:20:22.092 "sha512" 00:20:22.092 ], 00:20:22.092 "disable_auto_failback": false, 00:20:22.092 "fast_io_fail_timeout_sec": 0, 00:20:22.092 "generate_uuids": false, 00:20:22.092 "high_priority_weight": 0, 00:20:22.092 "io_path_stat": false, 00:20:22.092 "io_queue_requests": 512, 00:20:22.092 "keep_alive_timeout_ms": 10000, 00:20:22.092 "low_priority_weight": 0, 00:20:22.092 "medium_priority_weight": 0, 00:20:22.092 "nvme_adminq_poll_period_us": 10000, 00:20:22.092 "nvme_error_stat": false, 00:20:22.092 "nvme_ioq_poll_period_us": 0, 00:20:22.092 "rdma_cm_event_timeout_ms": 0, 00:20:22.092 "rdma_max_cq_size": 0, 00:20:22.092 "rdma_srq_size": 0, 00:20:22.092 "reconnect_delay_sec": 0, 00:20:22.092 "timeout_admin_us": 0, 00:20:22.092 "timeout_us": 0, 00:20:22.092 "transport_ack_timeout": 0, 00:20:22.092 "transport_retry_count": 4, 00:20:22.092 "transport_tos": 0 00:20:22.092 } 00:20:22.092 }, 00:20:22.092 { 00:20:22.092 "method": "bdev_nvme_attach_controller", 00:20:22.092 "params": { 00:20:22.092 "adrfam": "IPv4", 00:20:22.092 "ctrlr_loss_timeout_sec": 0, 00:20:22.092 "ddgst": false, 00:20:22.092 "fast_io_fail_timeout_sec": 0, 00:20:22.092 "hdgst": false, 00:20:22.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.092 "name": "nvme0", 00:20:22.092 "prchk_guard": false, 00:20:22.092 "prchk_reftag": false, 00:20:22.092 "psk": "key0", 00:20:22.092 "reconnect_delay_sec": 0, 00:20:22.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.092 "traddr": "10.0.0.2", 00:20:22.092 "trsvcid": "4420", 00:20:22.092 "trtype": "TCP" 00:20:22.092 } 00:20:22.092 }, 00:20:22.092 { 00:20:22.092 "method": "bdev_nvme_set_hotplug", 00:20:22.092 "params": { 00:20:22.092 "enable": false, 00:20:22.092 "period_us": 100000 00:20:22.092 } 00:20:22.092 }, 00:20:22.092 { 00:20:22.092 "method": "bdev_enable_histogram", 00:20:22.092 "params": { 00:20:22.092 "enable": true, 00:20:22.092 "name": "nvme0n1" 00:20:22.092 } 00:20:22.092 }, 00:20:22.092 { 00:20:22.092 "method": "bdev_wait_for_examine" 00:20:22.092 } 00:20:22.092 ] 00:20:22.092 }, 00:20:22.092 { 00:20:22.092 "subsystem": "nbd", 00:20:22.092 "config": [] 00:20:22.092 } 00:20:22.092 ] 00:20:22.092 }' 00:20:22.092 [2024-07-13 07:06:30.038928] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:20:22.092 [2024-07-13 07:06:30.039020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101145 ] 00:20:22.349 [2024-07-13 07:06:30.182278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.349 [2024-07-13 07:06:30.307768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.608 [2024-07-13 07:06:30.505139] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.175 07:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.175 07:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:23.175 07:06:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:23.175 07:06:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:23.433 07:06:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.433 07:06:31 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.433 Running I/O for 1 seconds... 00:20:24.364 00:20:24.364 Latency(us) 00:20:24.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.365 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:24.365 Verification LBA range: start 0x0 length 0x2000 00:20:24.365 nvme0n1 : 1.03 4119.65 16.09 0.00 0.00 30750.35 11319.85 23473.80 00:20:24.365 =================================================================================================================== 00:20:24.365 Total : 4119.65 16.09 0.00 0.00 30750.35 11319.85 23473.80 00:20:24.365 0 00:20:24.622 07:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:24.622 07:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:24.622 07:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:24.623 nvmf_trace.0 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 101145 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101145 ']' 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101145 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
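Because the replayed config already attaches the controller and enables the histogram, the script above only has to confirm that the expected nvme0 controller exists before driving I/O; compacted, the check and the run traced at target/tls.sh@275 and @276 are:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Confirm the TLS-attached controller from the replayed config is present...
name=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
# ...then run the verify workload that produced the second Latency table above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests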
00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101145 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101145' 00:20:24.623 killing process with pid 101145 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101145 00:20:24.623 Received shutdown signal, test time was about 1.000000 seconds 00:20:24.623 00:20:24.623 Latency(us) 00:20:24.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.623 =================================================================================================================== 00:20:24.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.623 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101145 00:20:24.881 07:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:24.881 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:24.881 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:24.881 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.881 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:24.881 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.881 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.881 rmmod nvme_tcp 00:20:24.881 rmmod nvme_fabrics 00:20:24.881 rmmod nvme_keyring 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 101101 ']' 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 101101 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 101101 ']' 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 101101 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101101 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:25.140 killing process with pid 101101 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101101' 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 101101 00:20:25.140 07:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 101101 00:20:25.140 07:06:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.140 07:06:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.140 07:06:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.140 07:06:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.140 07:06:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.140 07:06:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.140 07:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.140 07:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.398 07:06:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:25.398 07:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.BfpHOxcyB0 /tmp/tmp.Nsljy50Gpp /tmp/tmp.HHNJGClTEI 00:20:25.398 00:20:25.398 real 1m25.685s 00:20:25.398 user 2m12.497s 00:20:25.398 sys 0m30.156s 00:20:25.398 07:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:25.398 07:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.398 ************************************ 00:20:25.398 END TEST nvmf_tls 00:20:25.398 ************************************ 00:20:25.398 07:06:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:25.398 07:06:33 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:25.398 07:06:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:25.398 07:06:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:25.398 07:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:25.398 ************************************ 00:20:25.398 START TEST nvmf_fips 00:20:25.398 ************************************ 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:25.398 * Looking for test storage... 
00:20:25.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:25.398 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:25.399 07:06:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:25.657 Error setting digest 00:20:25.657 006239F8027F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:25.657 006239F8027F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:25.657 Cannot find device "nvmf_tgt_br" 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.657 Cannot find device "nvmf_tgt_br2" 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:25.657 Cannot find device "nvmf_tgt_br" 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:25.657 Cannot find device "nvmf_tgt_br2" 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:25.657 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:25.658 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.658 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.658 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:20:25.658 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:25.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:20:25.916 00:20:25.916 --- 10.0.0.2 ping statistics --- 00:20:25.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.916 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:25.916 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:25.916 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:20:25.916 00:20:25.916 --- 10.0.0.3 ping statistics --- 00:20:25.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.916 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:25.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:25.916 00:20:25.916 --- 10.0.0.1 ping statistics --- 00:20:25.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.916 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=101423 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 101423 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 101423 ']' 00:20:25.916 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.917 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.917 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.917 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.917 07:06:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.174 [2024-07-13 07:06:34.043984] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:20:26.174 [2024-07-13 07:06:34.044275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.174 [2024-07-13 07:06:34.188215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.432 [2024-07-13 07:06:34.306896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.432 [2024-07-13 07:06:34.307301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.432 [2024-07-13 07:06:34.307327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.432 [2024-07-13 07:06:34.307339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.432 [2024-07-13 07:06:34.307348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.432 [2024-07-13 07:06:34.307392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:26.999 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:27.257 [2024-07-13 07:06:35.325307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.515 [2024-07-13 07:06:35.341222] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.515 [2024-07-13 07:06:35.341469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.515 [2024-07-13 07:06:35.375827] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:27.515 malloc0 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=101481 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 101481 /var/tmp/bdevperf.sock 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 101481 ']' 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.515 07:06:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:27.515 [2024-07-13 07:06:35.488024] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:20:27.515 [2024-07-13 07:06:35.488123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101481 ] 00:20:27.774 [2024-07-13 07:06:35.631232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.774 [2024-07-13 07:06:35.723741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.729 07:06:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.729 07:06:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:28.729 07:06:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:28.729 [2024-07-13 07:06:36.673249] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.729 [2024-07-13 07:06:36.673355] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:28.729 TLSTESTn1 00:20:28.729 07:06:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:28.988 Running I/O for 10 seconds... 
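Everything bdevperf exercises for the next ten seconds was wired up by the commands traced above. A minimal sketch of that sequence, reusing only the binaries, sockets and RPC arguments that appear in the trace (it assumes execution from the SPDK repo root; the target-side configuration done by setup_nvmf_tgt_conf is summarized in a comment rather than reproduced):

  # Sketch of the TLS test flow recorded above; arguments copied from the trace.
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path" && chmod 0600 "$key_path"
  # setup_nvmf_tgt_conf drives scripts/rpc.py against the target to create the TCP transport,
  # the nqn.2016-06.io.spdk:cnode1 subsystem backed by the malloc0 bdev seen above, the
  # 10.0.0.2:4420 listener, and the PSK registration for host nqn.2016-06.io.spdk:host1
  # (the deprecated "PSK path" warning above comes from that step).

  # Initiator side: bdevperf idles on its own RPC socket, the TLS-protected controller is
  # attached over that socket, then the verify workload is kicked off.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$key_path"
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests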
00:20:38.963 00:20:38.963 Latency(us) 00:20:38.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.963 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:38.964 Verification LBA range: start 0x0 length 0x2000 00:20:38.964 TLSTESTn1 : 10.02 4180.47 16.33 0.00 0.00 30560.93 6583.39 25261.15 00:20:38.964 =================================================================================================================== 00:20:38.964 Total : 4180.47 16.33 0.00 0.00 30560.93 6583.39 25261.15 00:20:38.964 0 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:38.964 nvmf_trace.0 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 101481 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 101481 ']' 00:20:38.964 07:06:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 101481 00:20:38.964 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:38.964 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.964 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101481 00:20:38.964 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:38.964 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:38.964 killing process with pid 101481 00:20:38.964 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101481' 00:20:38.964 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 101481 00:20:38.964 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.964 00:20:38.964 Latency(us) 00:20:38.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.964 =================================================================================================================== 00:20:38.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.964 [2024-07-13 07:06:47.025103] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:38.964 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 101481 00:20:39.223 07:06:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:39.223 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:20:39.223 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:39.223 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.223 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:39.223 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.223 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.223 rmmod nvme_tcp 00:20:39.481 rmmod nvme_fabrics 00:20:39.481 rmmod nvme_keyring 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 101423 ']' 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 101423 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 101423 ']' 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 101423 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101423 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101423' 00:20:39.481 killing process with pid 101423 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 101423 00:20:39.481 [2024-07-13 07:06:47.359014] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:39.481 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 101423 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:39.740 00:20:39.740 real 0m14.390s 00:20:39.740 user 0m18.645s 00:20:39.740 sys 0m6.415s 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.740 ************************************ 00:20:39.740 END TEST nvmf_fips 00:20:39.740 ************************************ 00:20:39.740 07:06:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 
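The teardown traced above is the pattern every nvmf test in this run follows: preserve the trace shared memory, stop the initiator, unload the kernel modules, stop the target, and dismantle the namespace. A rough sketch of what it amounts to (simplified; $output_dir, $bdevperf_pid and $nvmfpid are placeholder names, and the netns removal is what _remove_spdk_ns is assumed to do behind the redirected eval above):

  # Simplified view of the cleanup sequence in the trace, not the literal common.sh code.
  tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0  # keep the SPDK trace
  kill "$bdevperf_pid" && wait "$bdevperf_pid"                             # stop bdevperf (reactor_2)
  sync
  modprobe -r nvme-tcp nvme-fabrics                                        # rmmod nvme_tcp/_fabrics/_keyring
  kill "$nvmfpid"                                                          # stop nvmf_tgt (reactor_1)
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null                             # assumed body of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if
  rm -f test/nvmf/fips/key.txt                                             # do not leave the PSK on disk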
00:20:39.740 07:06:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:39.740 07:06:47 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:20:39.740 07:06:47 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:39.740 07:06:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:39.740 07:06:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.740 07:06:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:39.740 ************************************ 00:20:39.740 START TEST nvmf_fuzz 00:20:39.740 ************************************ 00:20:39.741 07:06:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:39.999 * Looking for test storage... 00:20:39.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:39.999 07:06:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.999 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:39.999 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.999 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:40.000 07:06:47 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:40.000 Cannot find device "nvmf_tgt_br" 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.000 Cannot find device "nvmf_tgt_br2" 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:40.000 Cannot find device "nvmf_tgt_br" 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:40.000 Cannot find device "nvmf_tgt_br2" 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:40.000 07:06:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.000 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:20:40.000 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.000 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:20:40.000 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.000 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.000 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.000 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.000 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.000 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:40.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:40.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:20:40.259 00:20:40.259 --- 10.0.0.2 ping statistics --- 00:20:40.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.259 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:40.259 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.259 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:20:40.259 00:20:40.259 --- 10.0.0.3 ping statistics --- 00:20:40.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.259 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:20:40.259 00:20:40.259 --- 10.0.0.1 ping statistics --- 00:20:40.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.259 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=101819 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 101819 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 101819 ']' 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
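The fuzz test rebuilds the same virtual topology the FIPS test used; the "Cannot find device" and "Cannot open network namespace" lines are simply the best-effort cleanup of a topology that does not exist yet. Condensed from the trace, with addresses and interface names exactly as logged, the setup is roughly:

  # nvmf_veth_init, condensed: the target runs inside a network namespace, the initiator
  # stays in the root namespace, and the two are joined through the nvmf_br bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg (10.0.0.1)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target leg (10.0.0.2)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target leg (10.0.0.3)
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up                     # the individual legs are also brought up, inside
                                             # and outside the namespace, as in the trace above
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity pings: 10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the netns

After this, nvmf_tgt is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -m 0x1) and waitforlisten polls /var/tmp/spdk.sock until its RPC server answers, which is the "Waiting for process to start up" message immediately above.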
00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.259 07:06:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:41.217 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.217 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:20:41.217 07:06:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.217 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.217 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:41.476 Malloc0 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:20:41.476 07:06:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:20:41.735 Shutting down the fuzz application 00:20:41.735 07:06:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:42.303 Shutting down the fuzz application 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.303 rmmod nvme_tcp 00:20:42.303 rmmod nvme_fabrics 00:20:42.303 rmmod nvme_keyring 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 101819 ']' 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 101819 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 101819 ']' 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 101819 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 101819 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:42.303 killing process with pid 101819 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 101819' 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 101819 00:20:42.303 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 101819 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:42.561 07:06:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:20:42.561 00:20:42.561 real 0m2.821s 00:20:42.561 user 0m3.031s 00:20:42.561 sys 0m0.664s 00:20:42.562 07:06:50 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:42.562 07:06:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:42.562 ************************************ 00:20:42.562 END TEST nvmf_fuzz 00:20:42.562 ************************************ 00:20:42.562 07:06:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:42.562 07:06:50 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:42.562 07:06:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:42.562 07:06:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.562 07:06:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:42.562 ************************************ 00:20:42.562 START TEST nvmf_multiconnection 00:20:42.562 ************************************ 00:20:42.562 07:06:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:42.821 * Looking for test storage... 00:20:42.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:42.821 Cannot find device "nvmf_tgt_br" 00:20:42.821 07:06:50 
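Editor's note: the trace above shows nvmf_veth_init first doing a best-effort teardown of any test network left by a previous run; the "Cannot find device" and "Cannot open network namespace" errors are expected on a clean host and are deliberately swallowed (the "# true" steps). A minimal standalone sketch of that cleanup, reconstructed from the traced commands and using the same interface and namespace names, could look like this:

# Hedged sketch: best-effort teardown of a previous run's test network.
# Failures (missing devices or namespace) are ignored on purpose.
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true
    ip link set "$dev" down     2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if        2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true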
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.821 Cannot find device "nvmf_tgt_br2" 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:42.821 Cannot find device "nvmf_tgt_br" 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:42.821 Cannot find device "nvmf_tgt_br2" 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:42.821 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- 
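Editor's note: the commands traced here build the test topology itself: a fresh namespace nvmf_tgt_ns_spdk, three veth pairs, the target-side ends (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) moved into the namespace, and the initiator keeping nvmf_init_if at 10.0.0.1. A condensed, standalone sketch of the same plumbing (same names and addresses as the trace; error handling omitted):

# Hedged sketch of the veth/namespace layout built by nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk
# one veth pair per endpoint; the *_br peer ends are later enslaved to a bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side ends into the namespace where nvmf_tgt will run
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up, including loopback inside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up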
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:43.081 07:06:50 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:43.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:20:43.081 00:20:43.081 --- 10.0.0.2 ping statistics --- 00:20:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.081 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:43.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:43.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:20:43.081 00:20:43.081 --- 10.0.0.3 ping statistics --- 00:20:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.081 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
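Editor's note: the bridge nvmf_br then joins the three host-side peer ends so the initiator at 10.0.0.1 can reach both target addresses, two iptables rules admit NVMe/TCP traffic on port 4420 and forwarding across the bridge, and single-packet pings in both directions confirm connectivity before the target is even started. A standalone sketch with the same names (firewall state on another host may differ):

# Hedged sketch: bridge the host-side veth ends and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow inbound NVMe/TCP (port 4420) on the initiator interface and
# forwarding between ports of the test bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# connectivity check: initiator -> both target IPs, and namespace -> initiator
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1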
00:20:43.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:43.081 00:20:43.081 --- 10.0.0.1 ping statistics --- 00:20:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.081 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=102029 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 102029 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 102029 ']' 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.081 07:06:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.081 [2024-07-13 07:06:51.140330] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:20:43.081 [2024-07-13 07:06:51.140438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.340 [2024-07-13 07:06:51.287947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.340 [2024-07-13 07:06:51.390612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
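Editor's note: nvmfappstart then loads the nvme-tcp host driver and launches the SPDK target inside the namespace (shared-memory id 0, tracepoint mask 0xFFFF, core mask 0xF, hence the four reactors reported below), and waits for the app to answer on its default RPC socket. The polling loop below is a hedged stand-in for the test's waitforlisten helper, using rpc.py spdk_get_version as the probe; the SPDK variable is introduced here for brevity.

# Hedged sketch: start nvmf_tgt in the test namespace and wait for its RPC socket.
SPDK=/home/vagrant/spdk_repo/spdk      # repo path as seen in the trace
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# stand-in for waitforlisten: poll the default RPC socket until it answers
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died
    sleep 0.5
done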
00:20:43.340 [2024-07-13 07:06:51.390835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.340 [2024-07-13 07:06:51.391005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.340 [2024-07-13 07:06:51.391164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.340 [2024-07-13 07:06:51.391279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.340 [2024-07-13 07:06:51.391605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.340 [2024-07-13 07:06:51.391719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.340 [2024-07-13 07:06:51.392646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.340 [2024-07-13 07:06:51.392664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 [2024-07-13 07:06:52.185061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 Malloc1 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- 
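Editor's note: with the target running, multiconnection.sh creates the TCP transport once, before any subsystems or listeners are added. rpc_cmd forwards its arguments to scripts/rpc.py against the target's RPC socket, so a roughly equivalent direct call is shown below; the transport options -o and -u 8192 are copied verbatim from the trace rather than chosen here.

# Hedged equivalent of "rpc_cmd nvmf_create_transport -t tcp -o -u 8192"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t tcp -o -u 8192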
common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 [2024-07-13 07:06:52.260655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 Malloc2 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:44.275 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.276 Malloc3 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:44.276 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.534 Malloc4 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.534 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:44.534 07:06:52 
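Editor's note: the trace repeats one pattern NVMF_SUBSYS=11 times: create a 64 MB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN and any-host access (-a), attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A condensed sketch of that loop calling rpc.py directly follows; the rpc shell function is a local convenience, not part of the test scripts.

# Hedged sketch of the per-subsystem setup loop (11 subsystems, as in the trace).
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
for i in $(seq 1 11); do
    rpc bdev_malloc_create 64 512 -b "Malloc$i"                       # 64 MB bdev, 512 B blocks
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done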
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 Malloc5 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 Malloc6 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 Malloc7 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 Malloc8 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.535 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 07:06:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 Malloc9 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.794 Malloc10 00:20:44.794 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.795 Malloc11 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.795 07:06:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:45.052 07:06:52 nvmf_tcp.nvmf_multiconnection -- 
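Editor's note: the initiator side now connects to each of the 11 subsystems over TCP, passing the host NQN and host ID generated earlier by nvmf/common.sh; each connect should surface one block device whose serial matches the SPDKN serial assigned to that subsystem. A sketch of the connect loop, using the host identity from this particular run:

# Hedged sketch of the initiator-side connect loop.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd
HOSTID=43021b44-defc-4eee-995c-65b6e79138bd
for i in $(seq 1 11); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
done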
target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:45.052 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:45.052 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:45.053 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:45.053 07:06:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:46.985 07:06:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:46.985 07:06:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:46.985 07:06:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:20:46.985 07:06:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:46.985 07:06:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:46.985 07:06:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:46.985 07:06:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:46.985 07:06:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:20:47.244 07:06:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:47.244 07:06:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:47.244 07:06:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:47.244 07:06:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:47.244 07:06:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:49.142 07:06:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:49.142 07:06:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:49.142 07:06:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:20:49.142 07:06:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:49.143 07:06:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:49.143 07:06:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:49.143 07:06:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:49.143 07:06:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:20:49.400 07:06:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:49.400 07:06:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:49.400 07:06:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:49.400 07:06:57 
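Editor's note: after each connect the test calls waitforserial SPDKN, which, as the xtrace shows, sleeps and re-checks lsblk until a block device reporting the expected serial appears, giving the kernel time to finish namespace scanning. The function below is a simplified reconstruction of that polling loop (the name waitforserial_sketch is hypothetical; the real helper also tracks how many devices it expects).

# Hedged reconstruction of the waitforserial polling seen in the trace:
# wait until a block device reports the expected SPDK serial.
waitforserial_sketch() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # the test expects exactly one device per serial (nvme_device_counter=1)
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )); then
            return 0
        fi
    done
    return 1
}
waitforserial_sketch SPDK1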
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:49.400 07:06:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:51.300 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:51.300 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:51.300 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:20:51.300 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:51.300 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:51.300 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:51.300 07:06:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:51.300 07:06:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:20:51.557 07:06:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:20:51.557 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:51.557 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:51.557 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:51.557 07:06:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:54.086 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:54.086 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:54.086 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:20:54.086 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:54.086 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:54.086 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:54.086 07:07:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:54.087 07:07:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:20:54.087 07:07:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:20:54.087 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:54.087 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:54.087 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:54.087 07:07:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:55.985 07:07:03 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:55.985 07:07:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:57.882 07:07:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:57.882 07:07:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:57.882 07:07:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:20:57.882 07:07:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:57.882 07:07:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:57.882 07:07:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:57.882 07:07:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.882 07:07:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:20:58.140 07:07:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:20:58.140 07:07:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:58.140 07:07:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:58.140 07:07:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:58.140 07:07:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:00.662 
07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:00.662 07:07:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:02.560 07:07:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:04.462 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:04.462 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:04.462 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:04.720 07:07:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:07.254 07:07:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:09.152 07:07:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:09.152 07:07:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:09.152 07:07:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:21:09.152 07:07:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:09.152 07:07:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:09.152 07:07:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:09.152 07:07:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:09.152 [global] 00:21:09.152 thread=1 00:21:09.152 invalidate=1 00:21:09.152 rw=read 00:21:09.152 time_based=1 00:21:09.152 runtime=10 00:21:09.152 ioengine=libaio 00:21:09.152 direct=1 00:21:09.152 bs=262144 00:21:09.152 iodepth=64 
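Editor's note: the read phase is driven by scripts/fio-wrapper with -p nvmf -i 262144 -d 64 -t read -r 10, which, as the echoed job file in the trace suggests, turns into one fio job per connected namespace doing 256 KiB sequential reads at queue depth 64 for 10 seconds. A standalone job file reproducing that configuration is sketched below; the /tmp path is arbitrary, and only the first and last of the 11 [jobN] sections are written out.

# Hedged sketch: recreate the kind of job file the wrapper generates and run it.
cat > /tmp/nvmf_multiconnection_read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1

[job10]
filename=/dev/nvme9n1
EOF
fio /tmp/nvmf_multiconnection_read.fio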
00:21:09.152 norandommap=1 00:21:09.152 numjobs=1 00:21:09.152 00:21:09.152 [job0] 00:21:09.152 filename=/dev/nvme0n1 00:21:09.152 [job1] 00:21:09.152 filename=/dev/nvme10n1 00:21:09.152 [job2] 00:21:09.152 filename=/dev/nvme1n1 00:21:09.152 [job3] 00:21:09.152 filename=/dev/nvme2n1 00:21:09.152 [job4] 00:21:09.152 filename=/dev/nvme3n1 00:21:09.152 [job5] 00:21:09.152 filename=/dev/nvme4n1 00:21:09.152 [job6] 00:21:09.152 filename=/dev/nvme5n1 00:21:09.152 [job7] 00:21:09.152 filename=/dev/nvme6n1 00:21:09.152 [job8] 00:21:09.152 filename=/dev/nvme7n1 00:21:09.152 [job9] 00:21:09.152 filename=/dev/nvme8n1 00:21:09.152 [job10] 00:21:09.152 filename=/dev/nvme9n1 00:21:09.152 Could not set queue depth (nvme0n1) 00:21:09.152 Could not set queue depth (nvme10n1) 00:21:09.152 Could not set queue depth (nvme1n1) 00:21:09.152 Could not set queue depth (nvme2n1) 00:21:09.152 Could not set queue depth (nvme3n1) 00:21:09.152 Could not set queue depth (nvme4n1) 00:21:09.152 Could not set queue depth (nvme5n1) 00:21:09.152 Could not set queue depth (nvme6n1) 00:21:09.152 Could not set queue depth (nvme7n1) 00:21:09.152 Could not set queue depth (nvme8n1) 00:21:09.152 Could not set queue depth (nvme9n1) 00:21:09.425 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:09.425 fio-3.35 00:21:09.425 Starting 11 threads 00:21:21.638 00:21:21.638 job0: (groupid=0, jobs=1): err= 0: pid=102506: Sat Jul 13 07:07:27 2024 00:21:21.638 read: IOPS=244, BW=61.2MiB/s (64.2MB/s)(623MiB/10172msec) 00:21:21.638 slat (usec): min=21, max=192572, avg=4011.43, stdev=19068.16 00:21:21.638 clat (msec): min=161, max=427, avg=256.66, stdev=23.08 00:21:21.638 lat (msec): min=194, max=446, avg=260.67, stdev=29.71 00:21:21.638 clat percentiles (msec): 00:21:21.638 | 1.00th=[ 199], 5.00th=[ 218], 10.00th=[ 234], 20.00th=[ 243], 00:21:21.638 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 257], 60.00th=[ 262], 00:21:21.638 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 284], 00:21:21.638 | 99.00th=[ 347], 99.50th=[ 368], 99.90th=[ 422], 99.95th=[ 422], 00:21:21.638 | 99.99th=[ 426] 00:21:21.638 bw ( KiB/s): min=48640, max=72558, per=4.82%, avg=62182.10, stdev=5595.10, samples=20 00:21:21.638 iops : min= 190, max= 283, avg=242.75, stdev=21.74, samples=20 
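Editor's note: the per-job results can be sanity-checked against the job parameters. With bs=262144 (256 KiB, i.e. 0.25 MiB), job0's ~244 IOPS corresponds to 244 x 0.25 = 61 MiB/s, matching the reported BW=61.2MiB/s (64.2 MB/s after the MiB-to-MB conversion), and the averaged bandwidth of 62182 KiB/s is likewise consistent with the averaged 242.75 IOPS.

# Quick consistency check of job0's numbers: IOPS x block size ~= bandwidth.
echo $(( 244 * 262144 / 1048576 ))   # prints 61 (MiB/s), matching BW=61.2MiB/s above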
00:21:21.638 lat (msec) : 250=34.19%, 500=65.81% 00:21:21.638 cpu : usr=0.08%, sys=1.04%, ctx=436, majf=0, minf=4097 00:21:21.638 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:21.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.638 issued rwts: total=2492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.638 job1: (groupid=0, jobs=1): err= 0: pid=102507: Sat Jul 13 07:07:27 2024 00:21:21.638 read: IOPS=551, BW=138MiB/s (145MB/s)(1391MiB/10082msec) 00:21:21.638 slat (usec): min=22, max=63967, avg=1793.60, stdev=6574.44 00:21:21.638 clat (msec): min=34, max=193, avg=113.95, stdev=16.05 00:21:21.638 lat (msec): min=36, max=194, avg=115.74, stdev=17.07 00:21:21.638 clat percentiles (msec): 00:21:21.638 | 1.00th=[ 78], 5.00th=[ 87], 10.00th=[ 93], 20.00th=[ 102], 00:21:21.638 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 115], 60.00th=[ 118], 00:21:21.638 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 132], 95.00th=[ 138], 00:21:21.638 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 194], 99.95th=[ 194], 00:21:21.638 | 99.99th=[ 194] 00:21:21.638 bw ( KiB/s): min=126204, max=158208, per=10.92%, avg=140772.65, stdev=9131.84, samples=20 00:21:21.638 iops : min= 492, max= 618, avg=549.75, stdev=35.66, samples=20 00:21:21.638 lat (msec) : 50=0.40%, 100=15.48%, 250=84.12% 00:21:21.638 cpu : usr=0.28%, sys=2.00%, ctx=913, majf=0, minf=4097 00:21:21.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:21.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.638 issued rwts: total=5562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.638 job2: (groupid=0, jobs=1): err= 0: pid=102508: Sat Jul 13 07:07:27 2024 00:21:21.638 read: IOPS=247, BW=61.8MiB/s (64.8MB/s)(629MiB/10181msec) 00:21:21.638 slat (usec): min=22, max=238277, avg=3939.43, stdev=22936.10 00:21:21.638 clat (msec): min=20, max=458, avg=254.70, stdev=44.28 00:21:21.638 lat (msec): min=22, max=509, avg=258.63, stdev=49.77 00:21:21.638 clat percentiles (msec): 00:21:21.638 | 1.00th=[ 78], 5.00th=[ 222], 10.00th=[ 232], 20.00th=[ 239], 00:21:21.638 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:21:21.638 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 292], 00:21:21.638 | 99.00th=[ 447], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447], 00:21:21.638 | 99.99th=[ 460] 00:21:21.638 bw ( KiB/s): min=32256, max=76288, per=4.86%, avg=62739.20, stdev=8695.30, samples=20 00:21:21.638 iops : min= 126, max= 298, avg=244.95, stdev=33.95, samples=20 00:21:21.638 lat (msec) : 50=0.08%, 100=2.47%, 250=36.98%, 500=60.48% 00:21:21.638 cpu : usr=0.08%, sys=0.98%, ctx=365, majf=0, minf=4097 00:21:21.638 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:21.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.638 issued rwts: total=2515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.638 job3: (groupid=0, jobs=1): err= 0: pid=102510: Sat Jul 13 07:07:27 2024 00:21:21.639 read: IOPS=1622, BW=406MiB/s 
(425MB/s)(4066MiB/10021msec) 00:21:21.639 slat (usec): min=20, max=248009, avg=597.22, stdev=3755.92 00:21:21.639 clat (msec): min=3, max=479, avg=38.78, stdev=31.99 00:21:21.639 lat (msec): min=3, max=479, avg=39.38, stdev=32.49 00:21:21.639 clat percentiles (msec): 00:21:21.639 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 31], 00:21:21.639 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 35], 00:21:21.639 | 70.00th=[ 39], 80.00th=[ 44], 90.00th=[ 47], 95.00th=[ 50], 00:21:21.639 | 99.00th=[ 251], 99.50th=[ 279], 99.90th=[ 439], 99.95th=[ 443], 00:21:21.639 | 99.99th=[ 481] 00:21:21.639 bw ( KiB/s): min=31232, max=477696, per=32.16%, avg=414775.95, stdev=121232.35, samples=20 00:21:21.639 iops : min= 122, max= 1866, avg=1620.10, stdev=473.56, samples=20 00:21:21.639 lat (msec) : 4=0.01%, 10=0.10%, 20=0.89%, 50=94.81%, 100=2.90% 00:21:21.639 lat (msec) : 250=0.19%, 500=1.10% 00:21:21.639 cpu : usr=0.58%, sys=4.43%, ctx=2047, majf=0, minf=4097 00:21:21.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:21.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.639 issued rwts: total=16263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.639 job4: (groupid=0, jobs=1): err= 0: pid=102511: Sat Jul 13 07:07:27 2024 00:21:21.639 read: IOPS=245, BW=61.4MiB/s (64.4MB/s)(626MiB/10181msec) 00:21:21.639 slat (usec): min=22, max=167269, avg=4006.33, stdev=16047.76 00:21:21.639 clat (msec): min=23, max=406, avg=255.98, stdev=35.82 00:21:21.639 lat (msec): min=24, max=406, avg=259.99, stdev=39.17 00:21:21.639 clat percentiles (msec): 00:21:21.639 | 1.00th=[ 101], 5.00th=[ 220], 10.00th=[ 236], 20.00th=[ 245], 00:21:21.639 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 262], 00:21:21.639 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 292], 00:21:21.639 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 405], 99.95th=[ 405], 00:21:21.639 | 99.99th=[ 405] 00:21:21.639 bw ( KiB/s): min=51712, max=70144, per=4.84%, avg=62406.75, stdev=4618.61, samples=20 00:21:21.639 iops : min= 202, max= 274, avg=243.65, stdev=18.07, samples=20 00:21:21.639 lat (msec) : 50=0.32%, 100=0.96%, 250=28.30%, 500=70.42% 00:21:21.639 cpu : usr=0.12%, sys=0.95%, ctx=521, majf=0, minf=4097 00:21:21.639 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:21.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.639 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.639 job5: (groupid=0, jobs=1): err= 0: pid=102512: Sat Jul 13 07:07:27 2024 00:21:21.639 read: IOPS=242, BW=60.7MiB/s (63.6MB/s)(618MiB/10180msec) 00:21:21.639 slat (usec): min=21, max=173016, avg=4068.95, stdev=16047.08 00:21:21.639 clat (msec): min=26, max=516, avg=259.13, stdev=32.03 00:21:21.639 lat (msec): min=27, max=516, avg=263.20, stdev=35.60 00:21:21.639 clat percentiles (msec): 00:21:21.639 | 1.00th=[ 134], 5.00th=[ 226], 10.00th=[ 234], 20.00th=[ 247], 00:21:21.639 | 30.00th=[ 255], 40.00th=[ 259], 50.00th=[ 262], 60.00th=[ 266], 00:21:21.639 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 292], 00:21:21.639 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 439], 99.95th=[ 439], 
00:21:21.639 | 99.99th=[ 518] 00:21:21.639 bw ( KiB/s): min=51712, max=72192, per=4.78%, avg=61613.20, stdev=4883.18, samples=20 00:21:21.639 iops : min= 202, max= 282, avg=240.55, stdev=19.02, samples=20 00:21:21.639 lat (msec) : 50=0.08%, 100=0.53%, 250=24.24%, 500=75.11%, 750=0.04% 00:21:21.639 cpu : usr=0.08%, sys=0.84%, ctx=589, majf=0, minf=4097 00:21:21.639 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:21.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.639 issued rwts: total=2471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.639 job6: (groupid=0, jobs=1): err= 0: pid=102513: Sat Jul 13 07:07:27 2024 00:21:21.639 read: IOPS=297, BW=74.3MiB/s (77.9MB/s)(756MiB/10181msec) 00:21:21.639 slat (usec): min=17, max=198724, avg=3289.76, stdev=15931.69 00:21:21.639 clat (msec): min=23, max=435, avg=211.76, stdev=79.60 00:21:21.639 lat (msec): min=24, max=459, avg=215.05, stdev=82.13 00:21:21.639 clat percentiles (msec): 00:21:21.639 | 1.00th=[ 64], 5.00th=[ 77], 10.00th=[ 86], 20.00th=[ 105], 00:21:21.639 | 30.00th=[ 209], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 259], 00:21:21.639 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 288], 00:21:21.639 | 99.00th=[ 317], 99.50th=[ 359], 99.90th=[ 397], 99.95th=[ 430], 00:21:21.639 | 99.99th=[ 435] 00:21:21.639 bw ( KiB/s): min=54272, max=175104, per=5.88%, avg=75768.60, stdev=35375.31, samples=20 00:21:21.639 iops : min= 212, max= 684, avg=295.85, stdev=138.23, samples=20 00:21:21.639 lat (msec) : 50=0.30%, 100=17.06%, 250=30.69%, 500=51.95% 00:21:21.639 cpu : usr=0.12%, sys=1.13%, ctx=544, majf=0, minf=4097 00:21:21.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:21:21.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.639 issued rwts: total=3024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.639 job7: (groupid=0, jobs=1): err= 0: pid=102514: Sat Jul 13 07:07:27 2024 00:21:21.639 read: IOPS=239, BW=60.0MiB/s (62.9MB/s)(611MiB/10181msec) 00:21:21.639 slat (usec): min=23, max=165825, avg=4096.33, stdev=16052.21 00:21:21.639 clat (msec): min=27, max=435, avg=262.07, stdev=37.33 00:21:21.639 lat (msec): min=29, max=435, avg=266.17, stdev=40.77 00:21:21.639 clat percentiles (msec): 00:21:21.639 | 1.00th=[ 82], 5.00th=[ 230], 10.00th=[ 241], 20.00th=[ 253], 00:21:21.639 | 30.00th=[ 257], 40.00th=[ 259], 50.00th=[ 264], 60.00th=[ 271], 00:21:21.639 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 292], 00:21:21.639 | 99.00th=[ 372], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:21:21.639 | 99.99th=[ 435] 00:21:21.639 bw ( KiB/s): min=48128, max=68608, per=4.72%, avg=60894.55, stdev=4980.14, samples=20 00:21:21.639 iops : min= 188, max= 268, avg=237.75, stdev=19.44, samples=20 00:21:21.639 lat (msec) : 50=0.12%, 100=2.37%, 250=15.10%, 500=82.40% 00:21:21.639 cpu : usr=0.12%, sys=1.07%, ctx=311, majf=0, minf=4097 00:21:21.639 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:21:21.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.639 issued 
rwts: total=2443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.639 job8: (groupid=0, jobs=1): err= 0: pid=102515: Sat Jul 13 07:07:27 2024 00:21:21.639 read: IOPS=236, BW=59.2MiB/s (62.0MB/s)(602MiB/10178msec) 00:21:21.639 slat (usec): min=23, max=161834, avg=4152.06, stdev=14140.90 00:21:21.639 clat (msec): min=75, max=425, avg=265.80, stdev=28.21 00:21:21.639 lat (msec): min=76, max=426, avg=269.95, stdev=31.72 00:21:21.639 clat percentiles (msec): 00:21:21.639 | 1.00th=[ 159], 5.00th=[ 228], 10.00th=[ 241], 20.00th=[ 251], 00:21:21.639 | 30.00th=[ 257], 40.00th=[ 262], 50.00th=[ 271], 60.00th=[ 275], 00:21:21.639 | 70.00th=[ 279], 80.00th=[ 284], 90.00th=[ 292], 95.00th=[ 296], 00:21:21.639 | 99.00th=[ 330], 99.50th=[ 376], 99.90th=[ 409], 99.95th=[ 426], 00:21:21.639 | 99.99th=[ 426] 00:21:21.639 bw ( KiB/s): min=47616, max=65024, per=4.65%, avg=60026.00, stdev=4413.58, samples=20 00:21:21.639 iops : min= 186, max= 254, avg=234.40, stdev=17.22, samples=20 00:21:21.639 lat (msec) : 100=0.21%, 250=19.43%, 500=80.37% 00:21:21.639 cpu : usr=0.08%, sys=0.95%, ctx=494, majf=0, minf=4097 00:21:21.639 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:21:21.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.639 issued rwts: total=2409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.639 job9: (groupid=0, jobs=1): err= 0: pid=102516: Sat Jul 13 07:07:27 2024 00:21:21.639 read: IOPS=569, BW=142MiB/s (149MB/s)(1436MiB/10079msec) 00:21:21.639 slat (usec): min=16, max=221275, avg=1693.86, stdev=8179.93 00:21:21.639 clat (msec): min=5, max=458, avg=110.39, stdev=44.91 00:21:21.639 lat (msec): min=5, max=495, avg=112.08, stdev=46.13 00:21:21.639 clat percentiles (msec): 00:21:21.639 | 1.00th=[ 11], 5.00th=[ 18], 10.00th=[ 86], 20.00th=[ 94], 00:21:21.639 | 30.00th=[ 100], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:21:21.639 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 128], 95.00th=[ 236], 00:21:21.639 | 99.00th=[ 275], 99.50th=[ 292], 99.90th=[ 296], 99.95th=[ 422], 00:21:21.639 | 99.99th=[ 460] 00:21:21.639 bw ( KiB/s): min=64641, max=191871, per=11.28%, avg=145422.75, stdev=29052.11, samples=20 00:21:21.639 iops : min= 252, max= 749, avg=567.95, stdev=113.50, samples=20 00:21:21.639 lat (msec) : 10=0.63%, 20=4.75%, 50=0.21%, 100=25.81%, 250=65.45% 00:21:21.639 lat (msec) : 500=3.15% 00:21:21.639 cpu : usr=0.23%, sys=2.05%, ctx=1031, majf=0, minf=4097 00:21:21.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:21.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.639 issued rwts: total=5743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.639 job10: (groupid=0, jobs=1): err= 0: pid=102517: Sat Jul 13 07:07:27 2024 00:21:21.639 read: IOPS=581, BW=145MiB/s (152MB/s)(1466MiB/10087msec) 00:21:21.639 slat (usec): min=22, max=61149, avg=1701.93, stdev=6105.05 00:21:21.639 clat (msec): min=30, max=191, avg=108.16, stdev=16.37 00:21:21.639 lat (msec): min=31, max=191, avg=109.86, stdev=17.12 00:21:21.639 clat percentiles (msec): 00:21:21.639 | 1.00th=[ 47], 5.00th=[ 84], 10.00th=[ 91], 20.00th=[ 97], 
00:21:21.639 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 112], 00:21:21.639 | 70.00th=[ 115], 80.00th=[ 121], 90.00th=[ 127], 95.00th=[ 133], 00:21:21.639 | 99.00th=[ 144], 99.50th=[ 163], 99.90th=[ 192], 99.95th=[ 192], 00:21:21.640 | 99.99th=[ 192] 00:21:21.640 bw ( KiB/s): min=132608, max=171520, per=11.51%, avg=148457.45, stdev=10413.33, samples=20 00:21:21.640 iops : min= 518, max= 670, avg=579.75, stdev=40.67, samples=20 00:21:21.640 lat (msec) : 50=1.02%, 100=25.31%, 250=73.67% 00:21:21.640 cpu : usr=0.17%, sys=2.35%, ctx=1076, majf=0, minf=4097 00:21:21.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:21:21.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:21.640 issued rwts: total=5863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.640 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.640 00:21:21.640 Run status group 0 (all jobs): 00:21:21.640 READ: bw=1259MiB/s (1321MB/s), 59.2MiB/s-406MiB/s (62.0MB/s-425MB/s), io=12.5GiB (13.4GB), run=10021-10181msec 00:21:21.640 00:21:21.640 Disk stats (read/write): 00:21:21.640 nvme0n1: ios=4856/0, merge=0/0, ticks=1234743/0, in_queue=1234743, util=97.51% 00:21:21.640 nvme10n1: ios=10996/0, merge=0/0, ticks=1237559/0, in_queue=1237559, util=97.74% 00:21:21.640 nvme1n1: ios=4904/0, merge=0/0, ticks=1219308/0, in_queue=1219308, util=98.09% 00:21:21.640 nvme2n1: ios=32399/0, merge=0/0, ticks=1224753/0, in_queue=1224753, util=98.02% 00:21:21.640 nvme3n1: ios=4877/0, merge=0/0, ticks=1232125/0, in_queue=1232125, util=98.21% 00:21:21.640 nvme4n1: ios=4819/0, merge=0/0, ticks=1232649/0, in_queue=1232649, util=98.18% 00:21:21.640 nvme5n1: ios=5924/0, merge=0/0, ticks=1236956/0, in_queue=1236956, util=98.57% 00:21:21.640 nvme6n1: ios=4758/0, merge=0/0, ticks=1235494/0, in_queue=1235494, util=98.61% 00:21:21.640 nvme7n1: ios=4691/0, merge=0/0, ticks=1236947/0, in_queue=1236947, util=98.87% 00:21:21.640 nvme8n1: ios=11360/0, merge=0/0, ticks=1241368/0, in_queue=1241368, util=98.87% 00:21:21.640 nvme9n1: ios=11614/0, merge=0/0, ticks=1238308/0, in_queue=1238308, util=98.90% 00:21:21.640 07:07:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:21.640 [global] 00:21:21.640 thread=1 00:21:21.640 invalidate=1 00:21:21.640 rw=randwrite 00:21:21.640 time_based=1 00:21:21.640 runtime=10 00:21:21.640 ioengine=libaio 00:21:21.640 direct=1 00:21:21.640 bs=262144 00:21:21.640 iodepth=64 00:21:21.640 norandommap=1 00:21:21.640 numjobs=1 00:21:21.640 00:21:21.640 [job0] 00:21:21.640 filename=/dev/nvme0n1 00:21:21.640 [job1] 00:21:21.640 filename=/dev/nvme10n1 00:21:21.640 [job2] 00:21:21.640 filename=/dev/nvme1n1 00:21:21.640 [job3] 00:21:21.640 filename=/dev/nvme2n1 00:21:21.640 [job4] 00:21:21.640 filename=/dev/nvme3n1 00:21:21.640 [job5] 00:21:21.640 filename=/dev/nvme4n1 00:21:21.640 [job6] 00:21:21.640 filename=/dev/nvme5n1 00:21:21.640 [job7] 00:21:21.640 filename=/dev/nvme6n1 00:21:21.640 [job8] 00:21:21.640 filename=/dev/nvme7n1 00:21:21.640 [job9] 00:21:21.640 filename=/dev/nvme8n1 00:21:21.640 [job10] 00:21:21.640 filename=/dev/nvme9n1 00:21:21.640 Could not set queue depth (nvme0n1) 00:21:21.640 Could not set queue depth (nvme10n1) 00:21:21.640 Could not set queue depth (nvme1n1) 00:21:21.640 Could not set queue depth (nvme2n1) 00:21:21.640 Could not set queue 
depth (nvme3n1) 00:21:21.640 Could not set queue depth (nvme4n1) 00:21:21.640 Could not set queue depth (nvme5n1) 00:21:21.640 Could not set queue depth (nvme6n1) 00:21:21.640 Could not set queue depth (nvme7n1) 00:21:21.640 Could not set queue depth (nvme8n1) 00:21:21.640 Could not set queue depth (nvme9n1) 00:21:21.640 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:21.640 fio-3.35 00:21:21.640 Starting 11 threads 00:21:31.614 00:21:31.614 job0: (groupid=0, jobs=1): err= 0: pid=102713: Sat Jul 13 07:07:38 2024 00:21:31.614 write: IOPS=633, BW=158MiB/s (166MB/s)(1598MiB/10090msec); 0 zone resets 00:21:31.614 slat (usec): min=22, max=9399, avg=1558.85, stdev=2619.16 00:21:31.614 clat (msec): min=13, max=194, avg=99.41, stdev= 8.36 00:21:31.614 lat (msec): min=13, max=194, avg=100.97, stdev= 8.10 00:21:31.614 clat percentiles (msec): 00:21:31.614 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 93], 20.00th=[ 96], 00:21:31.614 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 101], 00:21:31.614 | 70.00th=[ 103], 80.00th=[ 104], 90.00th=[ 107], 95.00th=[ 108], 00:21:31.614 | 99.00th=[ 110], 99.50th=[ 142], 99.90th=[ 182], 99.95th=[ 188], 00:21:31.614 | 99.99th=[ 194] 00:21:31.614 bw ( KiB/s): min=150227, max=168960, per=12.76%, avg=161985.20, stdev=5474.04, samples=20 00:21:31.614 iops : min= 586, max= 660, avg=632.60, stdev=21.55, samples=20 00:21:31.614 lat (msec) : 20=0.06%, 50=0.31%, 100=57.89%, 250=41.73% 00:21:31.614 cpu : usr=1.99%, sys=1.75%, ctx=8321, majf=0, minf=1 00:21:31.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:31.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.614 issued rwts: total=0,6393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.614 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.614 job1: (groupid=0, jobs=1): err= 0: pid=102714: Sat Jul 13 07:07:38 2024 00:21:31.614 write: IOPS=437, BW=109MiB/s (115MB/s)(1109MiB/10131msec); 0 zone resets 00:21:31.614 slat (usec): min=26, max=11015, avg=2247.46, stdev=3832.47 00:21:31.614 clat (msec): min=16, max=284, avg=143.78, 
stdev=18.56 00:21:31.614 lat (msec): min=16, max=284, avg=146.03, stdev=18.46 00:21:31.614 clat percentiles (msec): 00:21:31.614 | 1.00th=[ 72], 5.00th=[ 103], 10.00th=[ 136], 20.00th=[ 140], 00:21:31.614 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 148], 00:21:31.614 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 159], 00:21:31.614 | 99.00th=[ 178], 99.50th=[ 226], 99.90th=[ 275], 99.95th=[ 275], 00:21:31.614 | 99.99th=[ 284] 00:21:31.614 bw ( KiB/s): min=100662, max=150528, per=8.82%, avg=111952.85, stdev=9902.24, samples=20 00:21:31.614 iops : min= 393, max= 588, avg=437.30, stdev=38.69, samples=20 00:21:31.614 lat (msec) : 20=0.09%, 50=0.50%, 100=3.56%, 250=95.54%, 500=0.32% 00:21:31.614 cpu : usr=1.58%, sys=1.37%, ctx=5492, majf=0, minf=1 00:21:31.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:31.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.615 issued rwts: total=0,4437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.615 job2: (groupid=0, jobs=1): err= 0: pid=102726: Sat Jul 13 07:07:38 2024 00:21:31.615 write: IOPS=1185, BW=296MiB/s (311MB/s)(2978MiB/10049msec); 0 zone resets 00:21:31.615 slat (usec): min=15, max=24053, avg=834.78, stdev=1411.70 00:21:31.615 clat (msec): min=26, max=113, avg=53.14, stdev= 7.37 00:21:31.615 lat (msec): min=26, max=113, avg=53.97, stdev= 7.44 00:21:31.615 clat percentiles (msec): 00:21:31.615 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 50], 00:21:31.615 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 54], 00:21:31.615 | 70.00th=[ 55], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 58], 00:21:31.615 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 108], 00:21:31.615 | 99.99th=[ 114] 00:21:31.615 bw ( KiB/s): min=179712, max=328192, per=23.88%, avg=303235.65, stdev=32285.11, samples=20 00:21:31.615 iops : min= 702, max= 1282, avg=1184.40, stdev=126.10, samples=20 00:21:31.615 lat (msec) : 50=28.03%, 100=71.86%, 250=0.11% 00:21:31.615 cpu : usr=3.21%, sys=2.62%, ctx=14038, majf=0, minf=1 00:21:31.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:31.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.615 issued rwts: total=0,11913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.615 job3: (groupid=0, jobs=1): err= 0: pid=102727: Sat Jul 13 07:07:38 2024 00:21:31.615 write: IOPS=248, BW=62.1MiB/s (65.1MB/s)(634MiB/10213msec); 0 zone resets 00:21:31.615 slat (usec): min=23, max=59596, avg=3940.99, stdev=6985.71 00:21:31.615 clat (msec): min=33, max=455, avg=253.67, stdev=30.07 00:21:31.615 lat (msec): min=33, max=455, avg=257.61, stdev=29.71 00:21:31.615 clat percentiles (msec): 00:21:31.615 | 1.00th=[ 111], 5.00th=[ 230], 10.00th=[ 234], 20.00th=[ 243], 00:21:31.615 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:21:31.615 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 284], 00:21:31.615 | 99.00th=[ 363], 99.50th=[ 409], 99.90th=[ 439], 99.95th=[ 456], 00:21:31.615 | 99.99th=[ 456] 00:21:31.615 bw ( KiB/s): min=55296, max=67584, per=4.99%, avg=63302.40, stdev=3251.04, samples=20 00:21:31.615 iops : min= 216, max= 264, avg=247.20, 
stdev=12.73, samples=20 00:21:31.615 lat (msec) : 50=0.16%, 100=0.79%, 250=43.06%, 500=55.99% 00:21:31.615 cpu : usr=0.54%, sys=0.73%, ctx=2457, majf=0, minf=1 00:21:31.615 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:31.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.615 issued rwts: total=0,2536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.615 job4: (groupid=0, jobs=1): err= 0: pid=102728: Sat Jul 13 07:07:38 2024 00:21:31.615 write: IOPS=633, BW=158MiB/s (166MB/s)(1600MiB/10096msec); 0 zone resets 00:21:31.615 slat (usec): min=28, max=11010, avg=1557.22, stdev=2611.93 00:21:31.615 clat (msec): min=3, max=196, avg=99.36, stdev= 8.79 00:21:31.615 lat (msec): min=4, max=196, avg=100.91, stdev= 8.57 00:21:31.615 clat percentiles (msec): 00:21:31.615 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 93], 20.00th=[ 96], 00:21:31.615 | 30.00th=[ 97], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 101], 00:21:31.615 | 70.00th=[ 103], 80.00th=[ 104], 90.00th=[ 107], 95.00th=[ 107], 00:21:31.615 | 99.00th=[ 110], 99.50th=[ 144], 99.90th=[ 184], 99.95th=[ 190], 00:21:31.615 | 99.99th=[ 197] 00:21:31.615 bw ( KiB/s): min=151552, max=170155, per=12.77%, avg=162194.90, stdev=5720.98, samples=20 00:21:31.615 iops : min= 592, max= 664, avg=633.50, stdev=22.37, samples=20 00:21:31.615 lat (msec) : 4=0.02%, 10=0.09%, 20=0.06%, 50=0.31%, 100=57.06% 00:21:31.615 lat (msec) : 250=42.45% 00:21:31.615 cpu : usr=1.92%, sys=2.17%, ctx=7988, majf=0, minf=1 00:21:31.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:31.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.615 issued rwts: total=0,6400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.615 job5: (groupid=0, jobs=1): err= 0: pid=102729: Sat Jul 13 07:07:38 2024 00:21:31.615 write: IOPS=244, BW=61.2MiB/s (64.1MB/s)(625MiB/10221msec); 0 zone resets 00:21:31.615 slat (usec): min=22, max=87215, avg=3998.28, stdev=7269.69 00:21:31.615 clat (msec): min=3, max=459, avg=257.20, stdev=31.16 00:21:31.615 lat (msec): min=11, max=459, avg=261.20, stdev=30.74 00:21:31.615 clat percentiles (msec): 00:21:31.615 | 1.00th=[ 171], 5.00th=[ 228], 10.00th=[ 232], 20.00th=[ 241], 00:21:31.615 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 257], 00:21:31.615 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 305], 00:21:31.615 | 99.00th=[ 363], 99.50th=[ 414], 99.90th=[ 443], 99.95th=[ 460], 00:21:31.615 | 99.99th=[ 460] 00:21:31.615 bw ( KiB/s): min=51608, max=67584, per=4.91%, avg=62400.85, stdev=4901.18, samples=20 00:21:31.615 iops : min= 201, max= 264, avg=243.70, stdev=19.19, samples=20 00:21:31.615 lat (msec) : 4=0.04%, 50=0.16%, 250=46.94%, 500=52.86% 00:21:31.615 cpu : usr=0.58%, sys=0.65%, ctx=2796, majf=0, minf=1 00:21:31.615 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:31.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.615 issued rwts: total=0,2501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.615 latency : target=0, window=0, percentile=100.00%, depth=64 
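As a quick sanity check that is not part of the log itself, each job's reported bandwidth is simply its IOPS multiplied by the 256 KiB block size; for job0 of this randwrite phase (632.60 IOPS average):

    # 632.60 IOPS x 256 KiB per IO, expressed in KiB/s
    echo "632.60 * 256" | bc
    # -> 161945.60, in line with the reported avg bw of ~161985 KiB/s (158 MiB/s)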
00:21:31.615 job6: (groupid=0, jobs=1): err= 0: pid=102732: Sat Jul 13 07:07:38 2024 00:21:31.615 write: IOPS=245, BW=61.4MiB/s (64.4MB/s)(627MiB/10209msec); 0 zone resets 00:21:31.615 slat (usec): min=21, max=61347, avg=3986.46, stdev=7148.48 00:21:31.615 clat (msec): min=64, max=441, avg=256.41, stdev=30.21 00:21:31.615 lat (msec): min=64, max=441, avg=260.39, stdev=29.87 00:21:31.615 clat percentiles (msec): 00:21:31.615 | 1.00th=[ 133], 5.00th=[ 228], 10.00th=[ 232], 20.00th=[ 243], 00:21:31.615 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 259], 00:21:31.615 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 292], 00:21:31.615 | 99.00th=[ 351], 99.50th=[ 397], 99.90th=[ 426], 99.95th=[ 443], 00:21:31.615 | 99.99th=[ 443] 00:21:31.615 bw ( KiB/s): min=57229, max=67584, per=4.93%, avg=62572.95, stdev=4083.23, samples=20 00:21:31.615 iops : min= 223, max= 264, avg=244.35, stdev=15.94, samples=20 00:21:31.615 lat (msec) : 100=0.48%, 250=46.45%, 500=53.07% 00:21:31.615 cpu : usr=0.53%, sys=0.78%, ctx=2438, majf=0, minf=1 00:21:31.615 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:31.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.615 issued rwts: total=0,2508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.615 job7: (groupid=0, jobs=1): err= 0: pid=102736: Sat Jul 13 07:07:38 2024 00:21:31.615 write: IOPS=437, BW=109MiB/s (115MB/s)(1109MiB/10138msec); 0 zone resets 00:21:31.615 slat (usec): min=20, max=13476, avg=2248.85, stdev=3855.22 00:21:31.615 clat (msec): min=16, max=284, avg=143.80, stdev=18.56 00:21:31.615 lat (msec): min=16, max=284, avg=146.05, stdev=18.45 00:21:31.615 clat percentiles (msec): 00:21:31.615 | 1.00th=[ 81], 5.00th=[ 102], 10.00th=[ 136], 20.00th=[ 140], 00:21:31.615 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 148], 00:21:31.615 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 159], 00:21:31.615 | 99.00th=[ 178], 99.50th=[ 226], 99.90th=[ 275], 99.95th=[ 275], 00:21:31.615 | 99.99th=[ 284] 00:21:31.615 bw ( KiB/s): min=102400, max=149716, per=8.82%, avg=111927.00, stdev=9626.23, samples=20 00:21:31.615 iops : min= 400, max= 584, avg=437.15, stdev=37.45, samples=20 00:21:31.615 lat (msec) : 20=0.09%, 50=0.45%, 100=3.70%, 250=95.45%, 500=0.32% 00:21:31.615 cpu : usr=0.93%, sys=1.51%, ctx=5592, majf=0, minf=1 00:21:31.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:31.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.615 issued rwts: total=0,4437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.615 job8: (groupid=0, jobs=1): err= 0: pid=102738: Sat Jul 13 07:07:38 2024 00:21:31.615 write: IOPS=247, BW=62.0MiB/s (65.0MB/s)(633MiB/10214msec); 0 zone resets 00:21:31.615 slat (usec): min=22, max=62476, avg=3947.99, stdev=7053.12 00:21:31.615 clat (msec): min=35, max=445, avg=254.06, stdev=33.49 00:21:31.615 lat (msec): min=35, max=445, avg=258.00, stdev=33.27 00:21:31.615 clat percentiles (msec): 00:21:31.615 | 1.00th=[ 110], 5.00th=[ 226], 10.00th=[ 230], 20.00th=[ 239], 00:21:31.615 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 255], 00:21:31.615 | 70.00th=[ 268], 
80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 296], 00:21:31.615 | 99.00th=[ 355], 99.50th=[ 401], 99.90th=[ 430], 99.95th=[ 447], 00:21:31.615 | 99.99th=[ 447] 00:21:31.615 bw ( KiB/s): min=51200, max=69632, per=4.98%, avg=63200.00, stdev=4817.43, samples=20 00:21:31.615 iops : min= 200, max= 272, avg=246.80, stdev=18.84, samples=20 00:21:31.615 lat (msec) : 50=0.32%, 100=0.63%, 250=54.50%, 500=44.55% 00:21:31.615 cpu : usr=0.61%, sys=0.88%, ctx=2593, majf=0, minf=1 00:21:31.615 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:31.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.615 issued rwts: total=0,2532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.615 job9: (groupid=0, jobs=1): err= 0: pid=102739: Sat Jul 13 07:07:38 2024 00:21:31.615 write: IOPS=249, BW=62.5MiB/s (65.5MB/s)(638MiB/10212msec); 0 zone resets 00:21:31.615 slat (usec): min=21, max=110837, avg=3789.06, stdev=7031.47 00:21:31.615 clat (msec): min=35, max=452, avg=252.13, stdev=31.74 00:21:31.615 lat (msec): min=35, max=452, avg=255.92, stdev=31.57 00:21:31.615 clat percentiles (msec): 00:21:31.615 | 1.00th=[ 144], 5.00th=[ 226], 10.00th=[ 234], 20.00th=[ 241], 00:21:31.615 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:21:31.615 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 279], 00:21:31.615 | 99.00th=[ 355], 99.50th=[ 401], 99.90th=[ 435], 99.95th=[ 451], 00:21:31.615 | 99.99th=[ 451] 00:21:31.615 bw ( KiB/s): min=50688, max=74240, per=5.02%, avg=63718.15, stdev=4339.15, samples=20 00:21:31.615 iops : min= 198, max= 290, avg=248.85, stdev=16.96, samples=20 00:21:31.615 lat (msec) : 50=0.31%, 100=0.63%, 250=41.22%, 500=57.84% 00:21:31.616 cpu : usr=0.55%, sys=0.70%, ctx=3930, majf=0, minf=1 00:21:31.616 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:31.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.616 issued rwts: total=0,2552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.616 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.616 job10: (groupid=0, jobs=1): err= 0: pid=102740: Sat Jul 13 07:07:38 2024 00:21:31.616 write: IOPS=442, BW=111MiB/s (116MB/s)(1121MiB/10131msec); 0 zone resets 00:21:31.616 slat (usec): min=27, max=11697, avg=2205.79, stdev=3815.89 00:21:31.616 clat (msec): min=5, max=281, avg=142.27, stdev=21.50 00:21:31.616 lat (msec): min=5, max=281, avg=144.48, stdev=21.55 00:21:31.616 clat percentiles (msec): 00:21:31.616 | 1.00th=[ 55], 5.00th=[ 94], 10.00th=[ 134], 20.00th=[ 140], 00:21:31.616 | 30.00th=[ 142], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 148], 00:21:31.616 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 159], 00:21:31.616 | 99.00th=[ 174], 99.50th=[ 222], 99.90th=[ 271], 99.95th=[ 271], 00:21:31.616 | 99.99th=[ 284] 00:21:31.616 bw ( KiB/s): min=102400, max=175967, per=8.92%, avg=113199.20, stdev=15241.38, samples=20 00:21:31.616 iops : min= 400, max= 687, avg=442.15, stdev=59.46, samples=20 00:21:31.616 lat (msec) : 10=0.09%, 20=0.18%, 50=0.62%, 100=6.98%, 250=91.82% 00:21:31.616 lat (msec) : 500=0.31% 00:21:31.616 cpu : usr=1.35%, sys=1.29%, ctx=6154, majf=0, minf=1 00:21:31.616 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 
00:21:31.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:31.616 issued rwts: total=0,4485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.616 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:31.616 00:21:31.616 Run status group 0 (all jobs): 00:21:31.616 WRITE: bw=1240MiB/s (1300MB/s), 61.2MiB/s-296MiB/s (64.1MB/s-311MB/s), io=12.4GiB (13.3GB), run=10049-10221msec 00:21:31.616 00:21:31.616 Disk stats (read/write): 00:21:31.616 nvme0n1: ios=49/12631, merge=0/0, ticks=42/1212944, in_queue=1212986, util=97.66% 00:21:31.616 nvme10n1: ios=49/8732, merge=0/0, ticks=57/1211072, in_queue=1211129, util=97.94% 00:21:31.616 nvme1n1: ios=31/23623, merge=0/0, ticks=27/1215077, in_queue=1215104, util=97.76% 00:21:31.616 nvme2n1: ios=22/4938, merge=0/0, ticks=29/1206376, in_queue=1206405, util=97.99% 00:21:31.616 nvme3n1: ios=13/12649, merge=0/0, ticks=28/1213290, in_queue=1213318, util=98.03% 00:21:31.616 nvme4n1: ios=0/4869, merge=0/0, ticks=0/1205918, in_queue=1205918, util=98.24% 00:21:31.616 nvme5n1: ios=0/4874, merge=0/0, ticks=0/1205367, in_queue=1205367, util=98.25% 00:21:31.616 nvme6n1: ios=0/8733, merge=0/0, ticks=0/1211126, in_queue=1211126, util=98.37% 00:21:31.616 nvme7n1: ios=0/4926, merge=0/0, ticks=0/1205656, in_queue=1205656, util=98.65% 00:21:31.616 nvme8n1: ios=0/4967, merge=0/0, ticks=0/1207527, in_queue=1207527, util=98.76% 00:21:31.616 nvme9n1: ios=0/8825, merge=0/0, ticks=0/1211517, in_queue=1211517, util=98.86% 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:31.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode2 00:21:31.616 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:31.616 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:31.616 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.616 07:07:38 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.616 07:07:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:31.616 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:31.616 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:31.616 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:31.616 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:31.617 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:31.617 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:31.617 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.617 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:31.876 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:31.876 rmmod nvme_tcp 00:21:31.876 rmmod nvme_fabrics 00:21:31.876 rmmod nvme_keyring 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 102029 ']' 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 102029 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 102029 ']' 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 102029 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102029 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:31.876 killing process with pid 102029 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102029' 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 
102029 00:21:31.876 07:07:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 102029 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:32.811 00:21:32.811 real 0m49.965s 00:21:32.811 user 2m49.774s 00:21:32.811 sys 0m23.563s 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:32.811 07:07:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:32.811 ************************************ 00:21:32.811 END TEST nvmf_multiconnection 00:21:32.811 ************************************ 00:21:32.811 07:07:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:32.811 07:07:40 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:32.811 07:07:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:32.811 07:07:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:32.811 07:07:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:32.811 ************************************ 00:21:32.811 START TEST nvmf_initiator_timeout 00:21:32.811 ************************************ 00:21:32.811 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:32.811 * Looking for test storage... 
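With the I/O phases done, the xtrace output above (the multiconnection.sh@37-40 trace) walks the teardown: for every subsystem the host connection is dropped with nvme disconnect, the script waits for the matching serial number to disappear from lsblk, and the subsystem is deleted on the target over JSON-RPC. Reconstructed from that trace (so treat helpers such as waitforserial_disconnect and rpc_cmd as SPDK test utilities rather than anything defined here), the loop is roughly:

    # Per-subsystem teardown, as traced above; NVMF_SUBSYS is 11 in this run.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"              # drop the host-side connection
        waitforserial_disconnect "SPDK$i"                             # poll lsblk until serial SPDK$i is gone
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # remove the subsystem on the target
    done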
00:21:32.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:32.811 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:32.812 07:07:40 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:32.812 Cannot find device "nvmf_tgt_br" 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:32.812 Cannot find device "nvmf_tgt_br2" 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:32.812 Cannot find device "nvmf_tgt_br" 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:32.812 Cannot find device "nvmf_tgt_br2" 00:21:32.812 07:07:40 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:32.812 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:33.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:33.071 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:33.071 07:07:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
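For readers following the trace: the nvmf_veth_init steps above reduce to a handful of iproute2 commands. The sketch below is hand-written from the logged commands (device names and addresses are exactly the ones in the trace; run as root), and is not the helper itself. The earlier "Cannot find device" / "Cannot open network namespace" messages are expected, since the teardown half of the helper runs first against a machine that has nothing to delete.

    # rebuild the test topology by hand: one initiator veth on the host, two target veths inside a namespace, all bridged
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator pair (host side)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                  # move the target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                                  # bridge the three host-side peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br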
00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:33.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:21:33.071 00:21:33.071 --- 10.0.0.2 ping statistics --- 00:21:33.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.071 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:33.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:33.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:21:33.071 00:21:33.071 --- 10.0.0.3 ping statistics --- 00:21:33.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.071 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:33.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:33.071 00:21:33.071 --- 10.0.0.1 ping statistics --- 00:21:33.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.071 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=103100 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 103100 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 103100 ']' 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.071 07:07:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:33.329 [2024-07-13 07:07:41.159964] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:21:33.330 [2024-07-13 07:07:41.160068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.330 [2024-07-13 07:07:41.299824] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.587 [2024-07-13 07:07:41.431947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.587 [2024-07-13 07:07:41.432041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.587 [2024-07-13 07:07:41.432056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.587 [2024-07-13 07:07:41.432067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.587 [2024-07-13 07:07:41.432076] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
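The nvmfappstart/waitforlisten pair above boils down to launching nvmf_tgt inside the namespace and polling its UNIX-domain RPC socket until it answers. A minimal hand-rolled sketch, assuming the same binary path and socket as the trace; the spdk_get_version probe here is just one convenient liveness RPC, not necessarily the call waitforlisten itself makes:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll until the RPC server answers on /var/tmp/spdk.sock
    until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done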
00:21:33.587 [2024-07-13 07:07:41.432421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.587 [2024-07-13 07:07:41.432621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.587 [2024-07-13 07:07:41.433261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.587 [2024-07-13 07:07:41.433363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.154 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:34.412 Malloc0 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:34.412 Delay0 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:34.412 [2024-07-13 07:07:42.273511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:34.412 [2024-07-13 07:07:42.301936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:34.412 07:07:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:21:36.938 07:07:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:36.938 07:07:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:36.938 07:07:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:36.938 07:07:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:36.938 07:07:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:36.938 07:07:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:21:36.938 07:07:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=103183 00:21:36.938 07:07:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:36.938 07:07:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:21:36.938 [global] 00:21:36.938 thread=1 00:21:36.938 invalidate=1 00:21:36.938 rw=write 00:21:36.938 time_based=1 00:21:36.938 runtime=60 00:21:36.938 ioengine=libaio 00:21:36.938 direct=1 00:21:36.938 bs=4096 00:21:36.938 iodepth=1 00:21:36.938 norandommap=0 00:21:36.938 numjobs=1 00:21:36.938 00:21:36.938 verify_dump=1 00:21:36.938 verify_backlog=512 00:21:36.938 verify_state_save=0 00:21:36.938 do_verify=1 00:21:36.938 verify=crc32c-intel 00:21:36.938 [job0] 00:21:36.938 filename=/dev/nvme0n1 00:21:36.938 Could not set queue depth (nvme0n1) 00:21:36.938 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:36.938 fio-3.35 00:21:36.938 Starting 1 thread 
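Stripped of the rpc_cmd/xtrace wrapping, the provisioning and attach that the trace just performed can be reproduced roughly as below. RPC names and arguments are exactly the ones logged; the UNIX-domain RPC socket is not network-namespaced, so rpc.py runs from the host as-is, and the fio one-liner is only an approximation of the generated job file (/dev/nvme0n1 is whatever device the connect happens to produce on a given machine).

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    # 64 MiB malloc bdev with 512 B blocks, wrapped in a delay bdev (latencies in microseconds)
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

    # TCP transport, one subsystem exporting the delay bdev on 10.0.0.2:4420
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side: kernel initiator attach, then the verified 60 s sequential-write job
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd \
        --hostid=43021b44-defc-4eee-995c-65b6e79138bd
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write \
        --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=60 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512 --verify_state_save=0

The delay bdev is the heart of this test: a few lines below, bdev_delay_update_latency raises its average and p99 latencies from 30 µs up to 31 s (and the p99 write value to 310 s) while fio is still running, then drops them back, which is why the completion-latency maximum reported in the fio summary is in the tens of seconds rather than microseconds.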
00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:39.465 true 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:39.465 true 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:39.465 true 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:39.465 true 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.465 07:07:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:42.779 true 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:42.779 true 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:42.779 true 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd 
bdev_delay_update_latency Delay0 p99_write 30 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:42.779 true 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:42.779 07:07:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 103183 00:22:38.993 00:22:38.993 job0: (groupid=0, jobs=1): err= 0: pid=103209: Sat Jul 13 07:08:44 2024 00:22:38.993 read: IOPS=781, BW=3124KiB/s (3199kB/s)(183MiB/60000msec) 00:22:38.993 slat (nsec): min=12387, max=76512, avg=15104.66, stdev=3775.02 00:22:38.993 clat (usec): min=167, max=8175, avg=211.58, stdev=44.13 00:22:38.993 lat (usec): min=181, max=8188, avg=226.69, stdev=44.46 00:22:38.993 clat percentiles (usec): 00:22:38.993 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:22:38.993 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:22:38.993 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 249], 00:22:38.993 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 326], 99.95th=[ 371], 00:22:38.993 | 99.99th=[ 807] 00:22:38.993 write: IOPS=785, BW=3140KiB/s (3216kB/s)(184MiB/60000msec); 0 zone resets 00:22:38.993 slat (usec): min=17, max=14994, avg=22.57, stdev=88.18 00:22:38.993 clat (usec): min=91, max=40600k, avg=1022.14, stdev=187064.22 00:22:38.993 lat (usec): min=146, max=40600k, avg=1044.70, stdev=187064.28 00:22:38.993 clat percentiles (usec): 00:22:38.993 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:22:38.993 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:22:38.993 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 194], 00:22:38.993 | 99.00th=[ 217], 99.50th=[ 229], 99.90th=[ 343], 99.95th=[ 404], 00:22:38.993 | 99.99th=[ 1237] 00:22:38.993 bw ( KiB/s): min= 1048, max=12064, per=100.00%, avg=9452.31, stdev=1911.60, samples=39 00:22:38.993 iops : min= 262, max= 3016, avg=2363.08, stdev=477.90, samples=39 00:22:38.993 lat (usec) : 100=0.01%, 250=97.56%, 500=2.41%, 750=0.01%, 1000=0.01% 00:22:38.993 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:22:38.993 cpu : usr=0.58%, sys=2.15%, ctx=94061, majf=0, minf=2 00:22:38.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.993 issued rwts: total=46866,47104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:38.993 00:22:38.993 Run status group 0 (all jobs): 00:22:38.993 READ: bw=3124KiB/s (3199kB/s), 3124KiB/s-3124KiB/s (3199kB/s-3199kB/s), io=183MiB (192MB), run=60000-60000msec 00:22:38.993 WRITE: bw=3140KiB/s (3216kB/s), 3140KiB/s-3140KiB/s (3216kB/s-3216kB/s), io=184MiB (193MB), run=60000-60000msec 00:22:38.993 00:22:38.993 Disk stats (read/write): 00:22:38.993 nvme0n1: ios=46847/46890, merge=0/0, ticks=10173/7959, in_queue=18132, util=99.53% 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:38.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:38.993 07:08:44 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:38.993 nvmf hotplug test: fio successful as expected 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.993 rmmod nvme_tcp 00:22:38.993 rmmod nvme_fabrics 00:22:38.993 rmmod nvme_keyring 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 103100 ']' 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 103100 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 103100 ']' 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 103100 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.993 07:08:44 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 103100 00:22:38.993 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.994 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:38.994 killing process with pid 103100 00:22:38.994 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 103100' 00:22:38.994 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 103100 00:22:38.994 07:08:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 103100 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:38.994 ************************************ 00:22:38.994 END TEST nvmf_initiator_timeout 00:22:38.994 ************************************ 00:22:38.994 00:22:38.994 real 1m4.685s 00:22:38.994 user 4m6.692s 00:22:38.994 sys 0m8.529s 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:38.994 07:08:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:38.994 07:08:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:38.994 07:08:45 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:22:38.994 07:08:45 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:38.994 07:08:45 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.994 07:08:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.994 07:08:45 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:38.994 07:08:45 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.994 07:08:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.994 07:08:45 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:38.994 07:08:45 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:38.994 07:08:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:38.994 07:08:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.994 07:08:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.994 ************************************ 00:22:38.994 START TEST nvmf_multicontroller 00:22:38.994 ************************************ 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh 
--transport=tcp 00:22:38.994 * Looking for test storage... 00:22:38.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:38.994 07:08:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:38.994 Cannot find device "nvmf_tgt_br" 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:38.994 Cannot find device "nvmf_tgt_br2" 00:22:38.994 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 
-- # ip link set nvmf_tgt_br down 00:22:38.995 Cannot find device "nvmf_tgt_br" 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:38.995 Cannot find device "nvmf_tgt_br2" 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:38.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:38.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 
-- # ip link set nvmf_init_br master nvmf_br 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:38.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:22:38.995 00:22:38.995 --- 10.0.0.2 ping statistics --- 00:22:38.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.995 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:38.995 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:38.995 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:22:38.995 00:22:38.995 --- 10.0.0.3 ping statistics --- 00:22:38.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.995 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:38.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:22:38.995 00:22:38.995 --- 10.0.0.1 ping statistics --- 00:22:38.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.995 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
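As in the previous test, the network helper finishes by punching a firewall hole for NVMe/TCP and sanity-pinging across the bridge in both directions; standalone, the checks shown above are simply:

    # accept NVMe/TCP (port 4420) from the initiator veth and let traffic hairpin through the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # one-shot reachability: host -> both target addresses, and namespace -> host
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1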
00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=104025 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 104025 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 104025 ']' 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.995 07:08:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.995 [2024-07-13 07:08:45.940475] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:38.995 [2024-07-13 07:08:45.940575] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.995 [2024-07-13 07:08:46.084954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:38.995 [2024-07-13 07:08:46.196214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.995 [2024-07-13 07:08:46.196280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.995 [2024-07-13 07:08:46.196290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.995 [2024-07-13 07:08:46.196298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.995 [2024-07-13 07:08:46.196308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
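The nvmfappstart step above launches the target inside the namespace and then waits on /var/tmp/spdk.sock. A condensed sketch of that step and of the rpc_cmd configuration calls that follow in the log, assuming an SPDK build tree with build/bin/nvmf_tgt and scripts/rpc.py and using a simple socket poll in place of the suite's waitforlisten helper:

# Start the target in the namespace with the same flags as the log, then configure it over the RPC socket.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # simplified stand-in for waitforlisten
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421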
00:22:38.995 [2024-07-13 07:08:46.196492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.995 [2024-07-13 07:08:46.196621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.995 [2024-07-13 07:08:46.196625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.995 [2024-07-13 07:08:46.911082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.995 Malloc0 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.995 [2024-07-13 07:08:46.988733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.995 
07:08:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.995 [2024-07-13 07:08:46.996624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:38.995 07:08:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.996 07:08:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.996 Malloc1 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=104077 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@47 -- # waitforlisten 104077 /var/tmp/bdevperf.sock 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 104077 ']' 00:22:38.996 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.281 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.281 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.281 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.281 07:08:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.238 NVMe0n1 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.238 1 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.238 2024/07/13 07:08:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:22:40.238 request: 00:22:40.238 { 00:22:40.238 "method": "bdev_nvme_attach_controller", 00:22:40.238 "params": { 00:22:40.238 "name": "NVMe0", 00:22:40.238 "trtype": "tcp", 00:22:40.238 "traddr": "10.0.0.2", 00:22:40.238 "adrfam": "ipv4", 00:22:40.238 "trsvcid": "4420", 00:22:40.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.238 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:40.238 "hostaddr": "10.0.0.2", 00:22:40.238 "hostsvcid": "60000", 00:22:40.238 "prchk_reftag": false, 00:22:40.238 "prchk_guard": false, 00:22:40.238 "hdgst": false, 00:22:40.238 "ddgst": false 00:22:40.238 } 00:22:40.238 } 00:22:40.238 Got JSON-RPC error response 00:22:40.238 GoRPCClient: error on JSON-RPC call 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:22:40.238 2024/07/13 07:08:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:22:40.238 request: 00:22:40.238 { 00:22:40.238 "method": "bdev_nvme_attach_controller", 00:22:40.238 "params": { 00:22:40.238 "name": "NVMe0", 00:22:40.238 "trtype": "tcp", 00:22:40.238 "traddr": "10.0.0.2", 00:22:40.238 "adrfam": "ipv4", 00:22:40.238 "trsvcid": "4420", 00:22:40.238 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:40.238 "hostaddr": "10.0.0.2", 00:22:40.238 "hostsvcid": "60000", 00:22:40.238 "prchk_reftag": false, 00:22:40.238 "prchk_guard": false, 00:22:40.238 "hdgst": false, 00:22:40.238 "ddgst": false 00:22:40.238 } 00:22:40.238 } 00:22:40.238 Got JSON-RPC error response 00:22:40.238 GoRPCClient: error on JSON-RPC call 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.238 2024/07/13 07:08:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], 
err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:22:40.238 request: 00:22:40.238 { 00:22:40.238 "method": "bdev_nvme_attach_controller", 00:22:40.238 "params": { 00:22:40.238 "name": "NVMe0", 00:22:40.238 "trtype": "tcp", 00:22:40.238 "traddr": "10.0.0.2", 00:22:40.238 "adrfam": "ipv4", 00:22:40.238 "trsvcid": "4420", 00:22:40.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.238 "hostaddr": "10.0.0.2", 00:22:40.238 "hostsvcid": "60000", 00:22:40.238 "prchk_reftag": false, 00:22:40.238 "prchk_guard": false, 00:22:40.238 "hdgst": false, 00:22:40.238 "ddgst": false, 00:22:40.238 "multipath": "disable" 00:22:40.238 } 00:22:40.238 } 00:22:40.238 Got JSON-RPC error response 00:22:40.238 GoRPCClient: error on JSON-RPC call 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:40.238 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.239 2024/07/13 07:08:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:22:40.239 request: 00:22:40.239 { 00:22:40.239 "method": "bdev_nvme_attach_controller", 00:22:40.239 "params": { 00:22:40.239 "name": "NVMe0", 00:22:40.239 
"trtype": "tcp", 00:22:40.239 "traddr": "10.0.0.2", 00:22:40.239 "adrfam": "ipv4", 00:22:40.239 "trsvcid": "4420", 00:22:40.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.239 "hostaddr": "10.0.0.2", 00:22:40.239 "hostsvcid": "60000", 00:22:40.239 "prchk_reftag": false, 00:22:40.239 "prchk_guard": false, 00:22:40.239 "hdgst": false, 00:22:40.239 "ddgst": false, 00:22:40.239 "multipath": "failover" 00:22:40.239 } 00:22:40.239 } 00:22:40.239 Got JSON-RPC error response 00:22:40.239 GoRPCClient: error on JSON-RPC call 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.239 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.239 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.498 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:40.498 07:08:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:41.430 0 00:22:41.687 07:08:49 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 104077 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 104077 ']' 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 104077 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104077 00:22:41.687 killing process with pid 104077 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104077' 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 104077 00:22:41.687 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 104077 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:41.946 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:22:41.946 [2024-07-13 07:08:47.120063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:22:41.946 [2024-07-13 07:08:47.120194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104077 ] 00:22:41.946 [2024-07-13 07:08:47.262674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.946 [2024-07-13 07:08:47.366325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.946 [2024-07-13 07:08:48.373391] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name d2987e36-6117-4267-b1f8-7f3bb074fd13 already exists 00:22:41.946 [2024-07-13 07:08:48.373457] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:d2987e36-6117-4267-b1f8-7f3bb074fd13 alias for bdev NVMe1n1 00:22:41.946 [2024-07-13 07:08:48.373474] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:41.946 Running I/O for 1 seconds... 00:22:41.946 00:22:41.946 Latency(us) 00:22:41.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.946 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:41.946 NVMe0n1 : 1.01 22357.91 87.34 0.00 0.00 5710.24 2129.92 10307.03 00:22:41.946 =================================================================================================================== 00:22:41.946 Total : 22357.91 87.34 0.00 0.00 5710.24 2129.92 10307.03 00:22:41.946 Received shutdown signal, test time was about 1.000000 seconds 00:22:41.946 00:22:41.946 Latency(us) 00:22:41.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.946 =================================================================================================================== 00:22:41.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.946 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.946 rmmod nvme_tcp 00:22:41.946 rmmod nvme_fabrics 00:22:41.946 rmmod nvme_keyring 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 104025 ']' 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 104025 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 104025 ']' 00:22:41.946 07:08:49 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 104025 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104025 00:22:41.946 killing process with pid 104025 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104025' 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 104025 00:22:41.946 07:08:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 104025 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:42.511 00:22:42.511 real 0m4.919s 00:22:42.511 user 0m15.191s 00:22:42.511 sys 0m1.134s 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.511 07:08:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:42.511 ************************************ 00:22:42.511 END TEST nvmf_multicontroller 00:22:42.511 ************************************ 00:22:42.511 07:08:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:42.511 07:08:50 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:42.511 07:08:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:42.511 07:08:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.511 07:08:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.511 ************************************ 00:22:42.511 START TEST nvmf_aer 00:22:42.511 ************************************ 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:42.511 * Looking for test storage... 
00:22:42.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.511 
07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:42.511 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:42.512 Cannot find device "nvmf_tgt_br" 00:22:42.512 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:22:42.512 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:42.512 Cannot find device "nvmf_tgt_br2" 00:22:42.512 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:22:42.512 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:42.512 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:42.512 Cannot find device "nvmf_tgt_br" 00:22:42.512 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:22:42.512 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:42.770 Cannot find device "nvmf_tgt_br2" 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:42.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:42.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:42.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:22:42.770 00:22:42.770 --- 10.0.0.2 ping statistics --- 00:22:42.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.770 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:42.770 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:42.770 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:22:42.770 00:22:42.770 --- 10.0.0.3 ping statistics --- 00:22:42.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.770 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:42.770 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:42.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:22:42.770 00:22:42.770 --- 10.0.0.1 ping statistics --- 00:22:42.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.770 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=104324 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 104324 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 104324 ']' 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.028 07:08:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.028 [2024-07-13 07:08:50.933411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:43.028 [2024-07-13 07:08:50.933519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.028 [2024-07-13 07:08:51.075883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.286 [2024-07-13 07:08:51.176615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.286 [2024-07-13 07:08:51.176873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:43.286 [2024-07-13 07:08:51.176975] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.286 [2024-07-13 07:08:51.177065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.286 [2024-07-13 07:08:51.177150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.286 [2024-07-13 07:08:51.177353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.286 [2024-07-13 07:08:51.177740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.286 [2024-07-13 07:08:51.177979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.286 [2024-07-13 07:08:51.178041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.222 [2024-07-13 07:08:51.974571] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.222 07:08:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.222 Malloc0 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.222 [2024-07-13 07:08:52.044978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.222 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.222 [ 00:22:44.222 { 00:22:44.222 "allow_any_host": true, 00:22:44.222 "hosts": [], 00:22:44.222 "listen_addresses": [], 00:22:44.222 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:44.222 "subtype": "Discovery" 00:22:44.222 }, 00:22:44.222 { 00:22:44.222 "allow_any_host": true, 00:22:44.222 "hosts": [], 00:22:44.222 "listen_addresses": [ 00:22:44.222 { 00:22:44.222 "adrfam": "IPv4", 00:22:44.222 "traddr": "10.0.0.2", 00:22:44.222 "trsvcid": "4420", 00:22:44.222 "trtype": "TCP" 00:22:44.222 } 00:22:44.222 ], 00:22:44.222 "max_cntlid": 65519, 00:22:44.222 "max_namespaces": 2, 00:22:44.222 "min_cntlid": 1, 00:22:44.222 "model_number": "SPDK bdev Controller", 00:22:44.222 "namespaces": [ 00:22:44.222 { 00:22:44.222 "bdev_name": "Malloc0", 00:22:44.222 "name": "Malloc0", 00:22:44.222 "nguid": "5D4D77F6481947D4B95A98D33BD5C2E3", 00:22:44.222 "nsid": 1, 00:22:44.222 "uuid": "5d4d77f6-4819-47d4-b95a-98d33bd5c2e3" 00:22:44.222 } 00:22:44.222 ], 00:22:44.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.222 "serial_number": "SPDK00000000000001", 00:22:44.222 "subtype": "NVMe" 00:22:44.222 } 00:22:44.222 ] 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=104384 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.223 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.481 Malloc1 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.481 [ 00:22:44.481 { 00:22:44.481 "allow_any_host": true, 00:22:44.481 "hosts": [], 00:22:44.481 "listen_addresses": [], 00:22:44.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:44.481 "subtype": "Discovery" 00:22:44.481 }, 00:22:44.481 { 00:22:44.481 "allow_any_host": true, 00:22:44.481 "hosts": [], 00:22:44.481 "listen_addresses": [ 00:22:44.481 { 00:22:44.481 "adrfam": "IPv4", 00:22:44.481 "traddr": "10.0.0.2", 00:22:44.481 "trsvcid": "4420", 00:22:44.481 "trtype": "TCP" 00:22:44.481 } 00:22:44.481 ], 00:22:44.481 "max_cntlid": 65519, 00:22:44.481 "max_namespaces": 2, 00:22:44.481 "min_cntlid": 1, 00:22:44.481 "model_number": "SPDK bdev Controller", 00:22:44.481 "namespaces": [ 00:22:44.481 { 00:22:44.481 "bdev_name": "Malloc0", 00:22:44.481 "name": "Malloc0", 00:22:44.481 "nguid": "5D4D77F6481947D4B95A98D33BD5C2E3", 00:22:44.481 "nsid": 1, 00:22:44.481 "uuid": "5d4d77f6-4819-47d4-b95a-98d33bd5c2e3" 00:22:44.481 }, 00:22:44.481 { 00:22:44.481 "bdev_name": "Malloc1", 00:22:44.481 "name": "Malloc1", 00:22:44.481 "nguid": "80B124708A1A42EE81972C57D73041F1", 00:22:44.481 "nsid": 2, 00:22:44.481 "uuid": "80b12470-8a1a-42ee-8197-2c57d73041f1" 00:22:44.481 } 00:22:44.481 ], 00:22:44.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.481 "serial_number": "SPDK00000000000001", 00:22:44.481 "subtype": "NVMe" 00:22:44.481 } 00:22:44.481 ] 00:22:44.481 Asynchronous Event Request test 00:22:44.481 Attaching to 10.0.0.2 00:22:44.481 Attached to 10.0.0.2 00:22:44.481 Registering asynchronous event callbacks... 00:22:44.481 Starting namespace attribute notice tests for all controllers... 00:22:44.481 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:44.481 aer_cb - Changed Namespace 00:22:44.481 Cleaning up... 
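
The asynchronous-event exercise above reduces to three steps: start the aer tool against cnode1, wait for its touch file, then hot-add a second namespace so the target emits the Namespace Attribute Changed notice seen in the output. A condensed sketch, with the in-tree rpc.py standing in for the rpc_cmd wrapper (an assumption about how that wrapper resolves):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rm -f /tmp/aer_touch_file
# connect to the subsystem and register AER callbacks; the tool touches the file once it is armed
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
# adding nsid 2 is what triggers the 'aer_cb - Changed Namespace' line above
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait "$aerpid"
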
00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 104384 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.481 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.482 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.482 rmmod nvme_tcp 00:22:44.482 rmmod nvme_fabrics 00:22:44.482 rmmod nvme_keyring 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 104324 ']' 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 104324 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 104324 ']' 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 104324 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104324 00:22:44.740 killing process with pid 104324 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104324' 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 104324 00:22:44.740 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 104324 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:44.998 00:22:44.998 real 0m2.471s 00:22:44.998 user 0m6.826s 00:22:44.998 sys 0m0.663s 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.998 07:08:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:44.998 ************************************ 00:22:44.998 END TEST nvmf_aer 00:22:44.998 ************************************ 00:22:44.998 07:08:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:44.998 07:08:52 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:44.999 07:08:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:44.999 07:08:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.999 07:08:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.999 ************************************ 00:22:44.999 START TEST nvmf_async_init 00:22:44.999 ************************************ 00:22:44.999 07:08:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:44.999 * Looking for test storage... 
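
For reference, the nvmf_aer teardown traced just before the nvmf_async_init banner condenses to the sequence below; rpc.py again stands in for rpc_cmd, and the namespace removal is approximated with a plain ip netns delete rather than the _remove_spdk_ns helper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# drop the test bdevs and the subsystem, then unwind the host stack and the target
$rpc bdev_malloc_delete Malloc0
$rpc bdev_malloc_delete Malloc1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp       # also unloads nvme_fabrics/nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
ip netns delete nvmf_tgt_ns_spdk   # assumption: stand-in for _remove_spdk_ns
ip -4 addr flush nvmf_init_if
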
00:22:44.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=05a115755bde45c4b0534f27aa0db554 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:44.999 07:08:53 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:44.999 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:45.257 Cannot find device "nvmf_tgt_br" 00:22:45.257 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:22:45.257 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:45.257 Cannot find device "nvmf_tgt_br2" 00:22:45.257 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:45.258 Cannot find device "nvmf_tgt_br" 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:22:45.258 Cannot find device "nvmf_tgt_br2" 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:45.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:45.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:45.258 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:45.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:45.516 00:22:45.516 --- 10.0.0.2 ping statistics --- 00:22:45.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.516 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:45.516 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:45.516 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:22:45.516 00:22:45.516 --- 10.0.0.3 ping statistics --- 00:22:45.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.516 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:45.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:45.516 00:22:45.516 --- 10.0.0.1 ping statistics --- 00:22:45.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.516 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
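
The nvmf_veth_init plumbing traced above builds one bridge that joins a host-side veth with two target-side veths living in nvmf_tgt_ns_spdk, then verifies it with the three pings. Consolidated into standalone commands taken from the trace (ordering lightly regrouped):

ip netns add nvmf_tgt_ns_spdk
# one veth pair for the initiator, two for the target; the *_br ends get enslaved to a bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# let NVMe/TCP traffic in on the initiator side and allow hairpin forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
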
00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=104552 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 104552 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 104552 ']' 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.516 07:08:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.516 [2024-07-13 07:08:53.494830] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:45.516 [2024-07-13 07:08:53.494950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.773 [2024-07-13 07:08:53.629022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.773 [2024-07-13 07:08:53.713721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.773 [2024-07-13 07:08:53.713768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.773 [2024-07-13 07:08:53.713779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.773 [2024-07-13 07:08:53.713786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.773 [2024-07-13 07:08:53.713793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
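
The app_setup_trace notices above point at a live trace ring in shared memory; a snapshot can be pulled while the target is running, along the lines the notice suggests (the spdk_trace binary path and output redirection are assumptions):

# parse the shared-memory trace ring of the nvmf app started with -i 0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
# or keep the raw ring file for offline analysis, exactly as the notice recommends
cp /dev/shm/nvmf_trace.0 /tmp/
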
00:22:45.773 [2024-07-13 07:08:53.713815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.706 [2024-07-13 07:08:54.472268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.706 null0 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 05a115755bde45c4b0534f27aa0db554 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:46.706 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.707 [2024-07-13 07:08:54.512347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.707 nvme0n1 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.707 [ 00:22:46.707 { 00:22:46.707 "aliases": [ 00:22:46.707 "05a11575-5bde-45c4-b053-4f27aa0db554" 00:22:46.707 ], 00:22:46.707 "assigned_rate_limits": { 00:22:46.707 "r_mbytes_per_sec": 0, 00:22:46.707 "rw_ios_per_sec": 0, 00:22:46.707 "rw_mbytes_per_sec": 0, 00:22:46.707 "w_mbytes_per_sec": 0 00:22:46.707 }, 00:22:46.707 "block_size": 512, 00:22:46.707 "claimed": false, 00:22:46.707 "driver_specific": { 00:22:46.707 "mp_policy": "active_passive", 00:22:46.707 "nvme": [ 00:22:46.707 { 00:22:46.707 "ctrlr_data": { 00:22:46.707 "ana_reporting": false, 00:22:46.707 "cntlid": 1, 00:22:46.707 "firmware_revision": "24.09", 00:22:46.707 "model_number": "SPDK bdev Controller", 00:22:46.707 "multi_ctrlr": true, 00:22:46.707 "oacs": { 00:22:46.707 "firmware": 0, 00:22:46.707 "format": 0, 00:22:46.707 "ns_manage": 0, 00:22:46.707 "security": 0 00:22:46.707 }, 00:22:46.707 "serial_number": "00000000000000000000", 00:22:46.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:46.707 "vendor_id": "0x8086" 00:22:46.707 }, 00:22:46.707 "ns_data": { 00:22:46.707 "can_share": true, 00:22:46.707 "id": 1 00:22:46.707 }, 00:22:46.707 "trid": { 00:22:46.707 "adrfam": "IPv4", 00:22:46.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:46.707 "traddr": "10.0.0.2", 00:22:46.707 "trsvcid": "4420", 00:22:46.707 "trtype": "TCP" 00:22:46.707 }, 00:22:46.707 "vs": { 00:22:46.707 "nvme_version": "1.3" 00:22:46.707 } 00:22:46.707 } 00:22:46.707 ] 00:22:46.707 }, 00:22:46.707 "memory_domains": [ 00:22:46.707 { 00:22:46.707 "dma_device_id": "system", 00:22:46.707 "dma_device_type": 1 00:22:46.707 } 00:22:46.707 ], 00:22:46.707 "name": "nvme0n1", 00:22:46.707 "num_blocks": 2097152, 00:22:46.707 "product_name": "NVMe disk", 00:22:46.707 "supported_io_types": { 00:22:46.707 "abort": true, 00:22:46.707 "compare": true, 00:22:46.707 "compare_and_write": true, 00:22:46.707 "copy": true, 00:22:46.707 "flush": true, 00:22:46.707 "get_zone_info": false, 00:22:46.707 "nvme_admin": true, 00:22:46.707 "nvme_io": true, 00:22:46.707 "nvme_io_md": false, 00:22:46.707 "nvme_iov_md": false, 00:22:46.707 "read": true, 00:22:46.707 "reset": true, 00:22:46.707 "seek_data": false, 00:22:46.707 "seek_hole": false, 00:22:46.707 "unmap": false, 00:22:46.707 "write": true, 00:22:46.707 "write_zeroes": true, 00:22:46.707 "zcopy": false, 00:22:46.707 "zone_append": false, 00:22:46.707 "zone_management": false 00:22:46.707 }, 00:22:46.707 "uuid": "05a11575-5bde-45c4-b053-4f27aa0db554", 00:22:46.707 "zoned": false 00:22:46.707 } 00:22:46.707 ] 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
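
The nvmf_async_init setup traced above, from transport creation through attaching and resetting the initiator-side controller, corresponds to this RPC sequence (rpc.py standing in for rpc_cmd; the NGUID is the uuidgen value generated earlier in the script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
# a 1024 MiB null bdev with 512-byte blocks backs namespace 1 of cnode0
$rpc bdev_null_create null0 1024 512
$rpc bdev_wait_for_examine
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 05a115755bde45c4b0534f27aa0db554
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# attach from the host side, inspect the resulting nvme0n1 bdev, then exercise a controller reset
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_get_bdevs -b nvme0n1
$rpc bdev_nvme_reset_controller nvme0
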
00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.707 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.965 [2024-07-13 07:08:54.782028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:46.965 [2024-07-13 07:08:54.782144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1207540 (9): Bad file descriptor 00:22:46.965 [2024-07-13 07:08:54.913703] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:46.965 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.965 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:46.965 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.965 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.965 [ 00:22:46.965 { 00:22:46.965 "aliases": [ 00:22:46.965 "05a11575-5bde-45c4-b053-4f27aa0db554" 00:22:46.965 ], 00:22:46.965 "assigned_rate_limits": { 00:22:46.965 "r_mbytes_per_sec": 0, 00:22:46.965 "rw_ios_per_sec": 0, 00:22:46.965 "rw_mbytes_per_sec": 0, 00:22:46.965 "w_mbytes_per_sec": 0 00:22:46.965 }, 00:22:46.965 "block_size": 512, 00:22:46.965 "claimed": false, 00:22:46.965 "driver_specific": { 00:22:46.965 "mp_policy": "active_passive", 00:22:46.965 "nvme": [ 00:22:46.965 { 00:22:46.965 "ctrlr_data": { 00:22:46.965 "ana_reporting": false, 00:22:46.965 "cntlid": 2, 00:22:46.965 "firmware_revision": "24.09", 00:22:46.965 "model_number": "SPDK bdev Controller", 00:22:46.966 "multi_ctrlr": true, 00:22:46.966 "oacs": { 00:22:46.966 "firmware": 0, 00:22:46.966 "format": 0, 00:22:46.966 "ns_manage": 0, 00:22:46.966 "security": 0 00:22:46.966 }, 00:22:46.966 "serial_number": "00000000000000000000", 00:22:46.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:46.966 "vendor_id": "0x8086" 00:22:46.966 }, 00:22:46.966 "ns_data": { 00:22:46.966 "can_share": true, 00:22:46.966 "id": 1 00:22:46.966 }, 00:22:46.966 "trid": { 00:22:46.966 "adrfam": "IPv4", 00:22:46.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:46.966 "traddr": "10.0.0.2", 00:22:46.966 "trsvcid": "4420", 00:22:46.966 "trtype": "TCP" 00:22:46.966 }, 00:22:46.966 "vs": { 00:22:46.966 "nvme_version": "1.3" 00:22:46.966 } 00:22:46.966 } 00:22:46.966 ] 00:22:46.966 }, 00:22:46.966 "memory_domains": [ 00:22:46.966 { 00:22:46.966 "dma_device_id": "system", 00:22:46.966 "dma_device_type": 1 00:22:46.966 } 00:22:46.966 ], 00:22:46.966 "name": "nvme0n1", 00:22:46.966 "num_blocks": 2097152, 00:22:46.966 "product_name": "NVMe disk", 00:22:46.966 "supported_io_types": { 00:22:46.966 "abort": true, 00:22:46.966 "compare": true, 00:22:46.966 "compare_and_write": true, 00:22:46.966 "copy": true, 00:22:46.966 "flush": true, 00:22:46.966 "get_zone_info": false, 00:22:46.966 "nvme_admin": true, 00:22:46.966 "nvme_io": true, 00:22:46.966 "nvme_io_md": false, 00:22:46.966 "nvme_iov_md": false, 00:22:46.966 "read": true, 00:22:46.966 "reset": true, 00:22:46.966 "seek_data": false, 00:22:46.966 "seek_hole": false, 00:22:46.966 "unmap": false, 00:22:46.966 "write": true, 00:22:46.966 "write_zeroes": true, 00:22:46.966 "zcopy": false, 00:22:46.966 "zone_append": false, 00:22:46.966 "zone_management": false 00:22:46.966 }, 00:22:46.966 "uuid": "05a11575-5bde-45c4-b053-4f27aa0db554", 00:22:46.966 "zoned": false 00:22:46.966 } 
00:22:46.966 ] 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zM4OKFuytR 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zM4OKFuytR 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.966 [2024-07-13 07:08:54.982169] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.966 [2024-07-13 07:08:54.982385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zM4OKFuytR 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.966 [2024-07-13 07:08:54.990127] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zM4OKFuytR 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.966 07:08:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:46.966 [2024-07-13 07:08:54.998151] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.966 [2024-07-13 07:08:54.998260] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
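
The TLS portion above adds a second, PSK-protected listener on port 4421 and re-attaches over it; condensed below, with rpc.py for rpc_cmd and a mktemp-generated key file (the trace itself warns that this PSK-path form is deprecated and scheduled for removal in v24.09):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# write the interchange-format PSK to a private file
key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
# require explicit host entries, open a secure-channel listener on 4421, and admit host1 with the PSK
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
# reconnect the initiator-side bdev over the TLS listener
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The key file is then removed with rm -f and the controller detached, matching the trace that follows.
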
00:22:47.225 nvme0n1 00:22:47.225 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.225 07:08:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:47.226 [ 00:22:47.226 { 00:22:47.226 "aliases": [ 00:22:47.226 "05a11575-5bde-45c4-b053-4f27aa0db554" 00:22:47.226 ], 00:22:47.226 "assigned_rate_limits": { 00:22:47.226 "r_mbytes_per_sec": 0, 00:22:47.226 "rw_ios_per_sec": 0, 00:22:47.226 "rw_mbytes_per_sec": 0, 00:22:47.226 "w_mbytes_per_sec": 0 00:22:47.226 }, 00:22:47.226 "block_size": 512, 00:22:47.226 "claimed": false, 00:22:47.226 "driver_specific": { 00:22:47.226 "mp_policy": "active_passive", 00:22:47.226 "nvme": [ 00:22:47.226 { 00:22:47.226 "ctrlr_data": { 00:22:47.226 "ana_reporting": false, 00:22:47.226 "cntlid": 3, 00:22:47.226 "firmware_revision": "24.09", 00:22:47.226 "model_number": "SPDK bdev Controller", 00:22:47.226 "multi_ctrlr": true, 00:22:47.226 "oacs": { 00:22:47.226 "firmware": 0, 00:22:47.226 "format": 0, 00:22:47.226 "ns_manage": 0, 00:22:47.226 "security": 0 00:22:47.226 }, 00:22:47.226 "serial_number": "00000000000000000000", 00:22:47.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:47.226 "vendor_id": "0x8086" 00:22:47.226 }, 00:22:47.226 "ns_data": { 00:22:47.226 "can_share": true, 00:22:47.226 "id": 1 00:22:47.226 }, 00:22:47.226 "trid": { 00:22:47.226 "adrfam": "IPv4", 00:22:47.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:47.226 "traddr": "10.0.0.2", 00:22:47.226 "trsvcid": "4421", 00:22:47.226 "trtype": "TCP" 00:22:47.226 }, 00:22:47.226 "vs": { 00:22:47.226 "nvme_version": "1.3" 00:22:47.226 } 00:22:47.226 } 00:22:47.226 ] 00:22:47.226 }, 00:22:47.226 "memory_domains": [ 00:22:47.226 { 00:22:47.226 "dma_device_id": "system", 00:22:47.226 "dma_device_type": 1 00:22:47.226 } 00:22:47.226 ], 00:22:47.226 "name": "nvme0n1", 00:22:47.226 "num_blocks": 2097152, 00:22:47.226 "product_name": "NVMe disk", 00:22:47.226 "supported_io_types": { 00:22:47.226 "abort": true, 00:22:47.226 "compare": true, 00:22:47.226 "compare_and_write": true, 00:22:47.226 "copy": true, 00:22:47.226 "flush": true, 00:22:47.226 "get_zone_info": false, 00:22:47.226 "nvme_admin": true, 00:22:47.226 "nvme_io": true, 00:22:47.226 "nvme_io_md": false, 00:22:47.226 "nvme_iov_md": false, 00:22:47.226 "read": true, 00:22:47.226 "reset": true, 00:22:47.226 "seek_data": false, 00:22:47.226 "seek_hole": false, 00:22:47.226 "unmap": false, 00:22:47.226 "write": true, 00:22:47.226 "write_zeroes": true, 00:22:47.226 "zcopy": false, 00:22:47.226 "zone_append": false, 00:22:47.226 "zone_management": false 00:22:47.226 }, 00:22:47.226 "uuid": "05a11575-5bde-45c4-b053-4f27aa0db554", 00:22:47.226 "zoned": false 00:22:47.226 } 00:22:47.226 ] 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.zM4OKFuytR 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:47.226 rmmod nvme_tcp 00:22:47.226 rmmod nvme_fabrics 00:22:47.226 rmmod nvme_keyring 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 104552 ']' 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 104552 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 104552 ']' 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 104552 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104552 00:22:47.226 killing process with pid 104552 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104552' 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 104552 00:22:47.226 [2024-07-13 07:08:55.251981] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.226 [2024-07-13 07:08:55.252018] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:47.226 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 104552 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:47.485 00:22:47.485 real 0m2.556s 00:22:47.485 user 0m2.293s 00:22:47.485 sys 0m0.631s 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:47.485 07:08:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:47.485 ************************************ 00:22:47.485 END TEST nvmf_async_init 00:22:47.485 ************************************ 00:22:47.485 07:08:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:47.485 07:08:55 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:47.485 07:08:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:47.485 07:08:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.485 07:08:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:47.485 ************************************ 00:22:47.485 START TEST dma 00:22:47.485 ************************************ 00:22:47.485 07:08:55 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:47.744 * Looking for test storage... 00:22:47.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:47.744 07:08:55 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.744 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:47.744 07:08:55 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.744 07:08:55 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.744 07:08:55 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.744 07:08:55 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.745 07:08:55 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.745 07:08:55 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.745 07:08:55 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:47.745 07:08:55 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.745 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:47.745 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.745 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.745 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.745 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.745 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.745 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.745 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.745 07:08:55 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.745 07:08:55 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:47.745 07:08:55 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:47.745 00:22:47.745 real 0m0.108s 00:22:47.745 user 0m0.048s 00:22:47.745 sys 0m0.063s 00:22:47.745 07:08:55 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:47.745 07:08:55 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:22:47.745 ************************************ 
00:22:47.745 END TEST dma 00:22:47.745 ************************************ 00:22:47.745 07:08:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:47.745 07:08:55 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:47.745 07:08:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:47.745 07:08:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.745 07:08:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:47.745 ************************************ 00:22:47.745 START TEST nvmf_identify 00:22:47.745 ************************************ 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:47.745 * Looking for test storage... 00:22:47.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:47.745 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:47.746 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:48.004 Cannot find device "nvmf_tgt_br" 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:48.004 Cannot find device "nvmf_tgt_br2" 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:48.004 Cannot find device "nvmf_tgt_br" 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:22:48.004 Cannot find device "nvmf_tgt_br2" 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:48.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:48.004 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:48.004 07:08:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:48.004 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:48.262 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:48.263 07:08:56 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:48.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:22:48.263 00:22:48.263 --- 10.0.0.2 ping statistics --- 00:22:48.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.263 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:48.263 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:48.263 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:22:48.263 00:22:48.263 --- 10.0.0.3 ping statistics --- 00:22:48.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.263 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:48.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:22:48.263 00:22:48.263 --- 10.0.0.1 ping statistics --- 00:22:48.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.263 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=104824 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 104824 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 104824 ']' 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
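The nvmf_veth_init steps traced above build the virtual test network before the target application is launched. Condensed into plain shell (interface names, addresses and the 4420 port are taken directly from the trace; the prior-run cleanup pass and its error suppression are left out), the setup is roughly:

  # the target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # three veth pairs: one for the initiator, two for the target
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target ends into the namespace and assign addresses
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 # second target address

  # bring the links up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # accept NVMe/TCP traffic on port 4420 and let it cross the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow simply confirm reachability of 10.0.0.2 and 10.0.0.3 from the host and of 10.0.0.1 from inside the namespace; nvmf_tgt is then started inside nvmf_tgt_ns_spdk via ip netns exec and waited on over its RPC socket (/var/tmp/spdk.sock), as the next lines of the trace show.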
00:22:48.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.263 07:08:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.263 [2024-07-13 07:08:56.239596] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:48.263 [2024-07-13 07:08:56.239703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.521 [2024-07-13 07:08:56.381880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.521 [2024-07-13 07:08:56.474420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.521 [2024-07-13 07:08:56.474483] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.521 [2024-07-13 07:08:56.474493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.521 [2024-07-13 07:08:56.474501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.521 [2024-07-13 07:08:56.474508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.521 [2024-07-13 07:08:56.474665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.521 [2024-07-13 07:08:56.475232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.521 [2024-07-13 07:08:56.475808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.521 [2024-07-13 07:08:56.475822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.456 [2024-07-13 07:08:57.252421] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.456 Malloc0 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.456 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.457 [2024-07-13 07:08:57.362541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.457 [ 00:22:49.457 { 00:22:49.457 "allow_any_host": true, 00:22:49.457 "hosts": [], 00:22:49.457 "listen_addresses": [ 00:22:49.457 { 00:22:49.457 "adrfam": "IPv4", 00:22:49.457 "traddr": "10.0.0.2", 00:22:49.457 "trsvcid": "4420", 00:22:49.457 "trtype": "TCP" 00:22:49.457 } 00:22:49.457 ], 00:22:49.457 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:49.457 "subtype": "Discovery" 00:22:49.457 }, 00:22:49.457 { 00:22:49.457 "allow_any_host": true, 00:22:49.457 "hosts": [], 00:22:49.457 "listen_addresses": [ 00:22:49.457 { 00:22:49.457 "adrfam": "IPv4", 00:22:49.457 "traddr": "10.0.0.2", 00:22:49.457 "trsvcid": "4420", 00:22:49.457 "trtype": "TCP" 00:22:49.457 } 00:22:49.457 ], 00:22:49.457 "max_cntlid": 65519, 00:22:49.457 "max_namespaces": 32, 00:22:49.457 "min_cntlid": 1, 00:22:49.457 "model_number": "SPDK bdev Controller", 00:22:49.457 "namespaces": [ 00:22:49.457 { 00:22:49.457 "bdev_name": "Malloc0", 00:22:49.457 "eui64": "ABCDEF0123456789", 00:22:49.457 "name": "Malloc0", 00:22:49.457 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:49.457 "nsid": 1, 00:22:49.457 "uuid": "09bea1c7-2b1f-44b0-8d21-cd97cd22ed26" 00:22:49.457 } 00:22:49.457 ], 00:22:49.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.457 "serial_number": "SPDK00000000000001", 00:22:49.457 "subtype": "NVMe" 00:22:49.457 } 00:22:49.457 ] 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.457 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify 
-r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:49.457 [2024-07-13 07:08:57.414411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:49.457 [2024-07-13 07:08:57.414480] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104876 ] 00:22:49.719 [2024-07-13 07:08:57.561438] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:49.719 [2024-07-13 07:08:57.561532] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:49.719 [2024-07-13 07:08:57.561539] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:49.719 [2024-07-13 07:08:57.561564] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:49.719 [2024-07-13 07:08:57.561574] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:49.719 [2024-07-13 07:08:57.561757] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:49.719 [2024-07-13 07:08:57.561815] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x220b6e0 0 00:22:49.719 [2024-07-13 07:08:57.575577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:49.719 [2024-07-13 07:08:57.575600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:49.719 [2024-07-13 07:08:57.575618] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:49.719 [2024-07-13 07:08:57.575621] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:49.719 [2024-07-13 07:08:57.575671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.719 [2024-07-13 07:08:57.575679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.719 [2024-07-13 07:08:57.575683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x220b6e0) 00:22:49.719 [2024-07-13 07:08:57.575700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:49.719 [2024-07-13 07:08:57.575734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.719 [2024-07-13 07:08:57.583585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.719 [2024-07-13 07:08:57.583600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.719 [2024-07-13 07:08:57.583604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.719 [2024-07-13 07:08:57.583609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.719 [2024-07-13 07:08:57.583620] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:49.719 [2024-07-13 07:08:57.583627] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:49.719 [2024-07-13 07:08:57.583634] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no 
timeout) 00:22:49.719 [2024-07-13 07:08:57.583652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.719 [2024-07-13 07:08:57.583658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.719 [2024-07-13 07:08:57.583661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x220b6e0) 00:22:49.719 [2024-07-13 07:08:57.583671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.719 [2024-07-13 07:08:57.583697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.719 [2024-07-13 07:08:57.583781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.719 [2024-07-13 07:08:57.583787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.719 [2024-07-13 07:08:57.583790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.719 [2024-07-13 07:08:57.583794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.719 [2024-07-13 07:08:57.583811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:49.719 [2024-07-13 07:08:57.583818] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:49.719 [2024-07-13 07:08:57.583838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.719 [2024-07-13 07:08:57.583841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.719 [2024-07-13 07:08:57.583845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x220b6e0) 00:22:49.719 [2024-07-13 07:08:57.583852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.719 [2024-07-13 07:08:57.583871] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.719 [2024-07-13 07:08:57.583931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.720 [2024-07-13 07:08:57.583937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.720 [2024-07-13 07:08:57.583940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.583944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.720 [2024-07-13 07:08:57.583950] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:49.720 [2024-07-13 07:08:57.583958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:49.720 [2024-07-13 07:08:57.583965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.583969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.583972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x220b6e0) 00:22:49.720 [2024-07-13 07:08:57.583979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.720 [2024-07-13 07:08:57.583996] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.720 [2024-07-13 07:08:57.584057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.720 [2024-07-13 07:08:57.584063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.720 [2024-07-13 07:08:57.584067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.720 [2024-07-13 07:08:57.584076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:49.720 [2024-07-13 07:08:57.584086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x220b6e0) 00:22:49.720 [2024-07-13 07:08:57.584100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.720 [2024-07-13 07:08:57.584122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.720 [2024-07-13 07:08:57.584177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.720 [2024-07-13 07:08:57.584183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.720 [2024-07-13 07:08:57.584187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.720 [2024-07-13 07:08:57.584196] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:49.720 [2024-07-13 07:08:57.584201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:49.720 [2024-07-13 07:08:57.584208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:49.720 [2024-07-13 07:08:57.584313] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:49.720 [2024-07-13 07:08:57.584319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:49.720 [2024-07-13 07:08:57.584329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x220b6e0) 00:22:49.720 [2024-07-13 07:08:57.584344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.720 [2024-07-13 07:08:57.584362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.720 [2024-07-13 07:08:57.584423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
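The DEBUG stream here is spdk_nvme_identify bringing up an NVMe/TCP fabrics connection to the discovery subsystem at 10.0.0.2:4420 (icreq/icresp exchange, FABRIC CONNECT, property reads of VS/CAP/CSTS, CC.EN = 1, then Identify and Get Log Page commands). The target it queries was configured a few steps earlier through rpc_cmd; outside the harness the same setup corresponds roughly to the following sequence against the default RPC socket (/var/tmp/spdk.sock), with the flags taken verbatim from the rpc_cmd calls in the trace:

  # TCP transport, then a 64 MiB / 512-byte-block malloc bdev to export
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

  # subsystem with one namespace, plus listeners for it and for discovery
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems   # the JSON dump shown earlier in the trace

  # the initiator-side query that produces this DEBUG stream and the report further down
  build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all

The discovery log page it retrieves (printed after the controller capabilities below) lists both the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, matching the two listeners added above.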
00:22:49.720 [2024-07-13 07:08:57.584429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.720 [2024-07-13 07:08:57.584432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.720 [2024-07-13 07:08:57.584441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:49.720 [2024-07-13 07:08:57.584450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x220b6e0) 00:22:49.720 [2024-07-13 07:08:57.584465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.720 [2024-07-13 07:08:57.584482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.720 [2024-07-13 07:08:57.584536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.720 [2024-07-13 07:08:57.584542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.720 [2024-07-13 07:08:57.584545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.720 [2024-07-13 07:08:57.584595] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:49.720 [2024-07-13 07:08:57.584601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:49.720 [2024-07-13 07:08:57.584609] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:49.720 [2024-07-13 07:08:57.584620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:49.720 [2024-07-13 07:08:57.584630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x220b6e0) 00:22:49.720 [2024-07-13 07:08:57.584650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.720 [2024-07-13 07:08:57.584671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.720 [2024-07-13 07:08:57.584781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.720 [2024-07-13 07:08:57.584788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.720 [2024-07-13 07:08:57.584792] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584796] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x220b6e0): datao=0, datal=4096, cccid=0 00:22:49.720 [2024-07-13 07:08:57.584801] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2256ec0) on tqpair(0x220b6e0): expected_datao=0, payload_size=4096 00:22:49.720 [2024-07-13 07:08:57.584806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584814] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584818] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.720 [2024-07-13 07:08:57.584832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.720 [2024-07-13 07:08:57.584835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.720 [2024-07-13 07:08:57.584847] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:49.720 [2024-07-13 07:08:57.584853] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:49.720 [2024-07-13 07:08:57.584857] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:49.720 [2024-07-13 07:08:57.584863] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:49.720 [2024-07-13 07:08:57.584868] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:49.720 [2024-07-13 07:08:57.584873] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:49.720 [2024-07-13 07:08:57.584882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:49.720 [2024-07-13 07:08:57.584889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.720 [2024-07-13 07:08:57.584897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x220b6e0) 00:22:49.721 [2024-07-13 07:08:57.584905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.721 [2024-07-13 07:08:57.584924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.721 [2024-07-13 07:08:57.585008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.721 [2024-07-13 07:08:57.585015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.721 [2024-07-13 07:08:57.585018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.721 [2024-07-13 07:08:57.585031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x220b6e0) 00:22:49.721 [2024-07-13 07:08:57.585044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.721 [2024-07-13 07:08:57.585050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585057] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x220b6e0) 00:22:49.721 [2024-07-13 07:08:57.585062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.721 [2024-07-13 07:08:57.585068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x220b6e0) 00:22:49.721 [2024-07-13 07:08:57.585080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.721 [2024-07-13 07:08:57.585086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585089] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.721 [2024-07-13 07:08:57.585098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.721 [2024-07-13 07:08:57.585102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:49.721 [2024-07-13 07:08:57.585115] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:49.721 [2024-07-13 07:08:57.585122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585125] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x220b6e0) 00:22:49.721 [2024-07-13 07:08:57.585132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.721 [2024-07-13 07:08:57.585151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2256ec0, cid 0, qid 0 00:22:49.721 [2024-07-13 07:08:57.585158] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257040, cid 1, qid 0 00:22:49.721 [2024-07-13 07:08:57.585163] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22571c0, cid 2, qid 0 00:22:49.721 [2024-07-13 07:08:57.585167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.721 [2024-07-13 07:08:57.585172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22574c0, cid 4, qid 0 00:22:49.721 [2024-07-13 07:08:57.585267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.721 [2024-07-13 07:08:57.585274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.721 [2024-07-13 07:08:57.585277] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22574c0) on tqpair=0x220b6e0 00:22:49.721 [2024-07-13 07:08:57.585286] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:49.721 [2024-07-13 07:08:57.585296] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:49.721 [2024-07-13 07:08:57.585306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x220b6e0) 00:22:49.721 [2024-07-13 07:08:57.585318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.721 [2024-07-13 07:08:57.585336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22574c0, cid 4, qid 0 00:22:49.721 [2024-07-13 07:08:57.585405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.721 [2024-07-13 07:08:57.585412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.721 [2024-07-13 07:08:57.585415] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585419] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x220b6e0): datao=0, datal=4096, cccid=4 00:22:49.721 [2024-07-13 07:08:57.585423] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22574c0) on tqpair(0x220b6e0): expected_datao=0, payload_size=4096 00:22:49.721 [2024-07-13 07:08:57.585427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585434] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585437] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.721 [2024-07-13 07:08:57.585450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.721 [2024-07-13 07:08:57.585453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22574c0) on tqpair=0x220b6e0 00:22:49.721 [2024-07-13 07:08:57.585470] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:49.721 [2024-07-13 07:08:57.585503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x220b6e0) 00:22:49.721 [2024-07-13 07:08:57.585516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.721 [2024-07-13 07:08:57.585523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x220b6e0) 00:22:49.721 [2024-07-13 
07:08:57.585536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.721 [2024-07-13 07:08:57.585561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22574c0, cid 4, qid 0 00:22:49.721 [2024-07-13 07:08:57.585568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257640, cid 5, qid 0 00:22:49.721 [2024-07-13 07:08:57.585685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.721 [2024-07-13 07:08:57.585693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.721 [2024-07-13 07:08:57.585697] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585700] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x220b6e0): datao=0, datal=1024, cccid=4 00:22:49.721 [2024-07-13 07:08:57.585705] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22574c0) on tqpair(0x220b6e0): expected_datao=0, payload_size=1024 00:22:49.721 [2024-07-13 07:08:57.585709] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585715] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585719] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.721 [2024-07-13 07:08:57.585729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.721 [2024-07-13 07:08:57.585733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.721 [2024-07-13 07:08:57.585736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257640) on tqpair=0x220b6e0 00:22:49.721 [2024-07-13 07:08:57.630568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.721 [2024-07-13 07:08:57.630587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.721 [2024-07-13 07:08:57.630591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22574c0) on tqpair=0x220b6e0 00:22:49.722 [2024-07-13 07:08:57.630614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x220b6e0) 00:22:49.722 [2024-07-13 07:08:57.630627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.722 [2024-07-13 07:08:57.630656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22574c0, cid 4, qid 0 00:22:49.722 [2024-07-13 07:08:57.630752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.722 [2024-07-13 07:08:57.630759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.722 [2024-07-13 07:08:57.630762] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630766] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x220b6e0): datao=0, datal=3072, cccid=4 00:22:49.722 [2024-07-13 07:08:57.630776] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22574c0) on tqpair(0x220b6e0): expected_datao=0, payload_size=3072 00:22:49.722 
[2024-07-13 07:08:57.630780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630787] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630791] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.722 [2024-07-13 07:08:57.630803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.722 [2024-07-13 07:08:57.630807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22574c0) on tqpair=0x220b6e0 00:22:49.722 [2024-07-13 07:08:57.630820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x220b6e0) 00:22:49.722 [2024-07-13 07:08:57.630831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.722 [2024-07-13 07:08:57.630855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22574c0, cid 4, qid 0 00:22:49.722 [2024-07-13 07:08:57.630929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.722 [2024-07-13 07:08:57.630935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.722 [2024-07-13 07:08:57.630939] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630942] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x220b6e0): datao=0, datal=8, cccid=4 00:22:49.722 [2024-07-13 07:08:57.630946] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22574c0) on tqpair(0x220b6e0): expected_datao=0, payload_size=8 00:22:49.722 [2024-07-13 07:08:57.630961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630967] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.630971] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.722 ===================================================== 00:22:49.722 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:49.722 ===================================================== 00:22:49.722 Controller Capabilities/Features 00:22:49.722 ================================ 00:22:49.722 Vendor ID: 0000 00:22:49.722 Subsystem Vendor ID: 0000 00:22:49.722 Serial Number: .................... 00:22:49.722 Model Number: ........................................ 
00:22:49.722 Firmware Version: 24.09 00:22:49.722 Recommended Arb Burst: 0 00:22:49.722 IEEE OUI Identifier: 00 00 00 00:22:49.722 Multi-path I/O 00:22:49.722 May have multiple subsystem ports: No 00:22:49.722 May have multiple controllers: No 00:22:49.722 Associated with SR-IOV VF: No 00:22:49.722 Max Data Transfer Size: 131072 00:22:49.722 Max Number of Namespaces: 0 00:22:49.722 Max Number of I/O Queues: 1024 00:22:49.722 NVMe Specification Version (VS): 1.3 00:22:49.722 NVMe Specification Version (Identify): 1.3 00:22:49.722 Maximum Queue Entries: 128 00:22:49.722 Contiguous Queues Required: Yes 00:22:49.722 Arbitration Mechanisms Supported 00:22:49.722 Weighted Round Robin: Not Supported 00:22:49.722 Vendor Specific: Not Supported 00:22:49.722 Reset Timeout: 15000 ms 00:22:49.722 Doorbell Stride: 4 bytes 00:22:49.722 NVM Subsystem Reset: Not Supported 00:22:49.722 Command Sets Supported 00:22:49.722 NVM Command Set: Supported 00:22:49.722 Boot Partition: Not Supported 00:22:49.722 Memory Page Size Minimum: 4096 bytes 00:22:49.722 Memory Page Size Maximum: 4096 bytes 00:22:49.722 Persistent Memory Region: Not Supported 00:22:49.722 Optional Asynchronous Events Supported 00:22:49.722 Namespace Attribute Notices: Not Supported 00:22:49.722 Firmware Activation Notices: Not Supported 00:22:49.722 ANA Change Notices: Not Supported 00:22:49.722 PLE Aggregate Log Change Notices: Not Supported 00:22:49.722 LBA Status Info Alert Notices: Not Supported 00:22:49.722 EGE Aggregate Log Change Notices: Not Supported 00:22:49.722 Normal NVM Subsystem Shutdown event: Not Supported 00:22:49.722 Zone Descriptor Change Notices: Not Supported 00:22:49.722 Discovery Log Change Notices: Supported 00:22:49.722 Controller Attributes 00:22:49.722 128-bit Host Identifier: Not Supported 00:22:49.722 Non-Operational Permissive Mode: Not Supported 00:22:49.722 NVM Sets: Not Supported 00:22:49.722 Read Recovery Levels: Not Supported 00:22:49.722 Endurance Groups: Not Supported 00:22:49.722 Predictable Latency Mode: Not Supported 00:22:49.722 Traffic Based Keep ALive: Not Supported 00:22:49.722 Namespace Granularity: Not Supported 00:22:49.722 SQ Associations: Not Supported 00:22:49.722 UUID List: Not Supported 00:22:49.722 Multi-Domain Subsystem: Not Supported 00:22:49.722 Fixed Capacity Management: Not Supported 00:22:49.722 Variable Capacity Management: Not Supported 00:22:49.722 Delete Endurance Group: Not Supported 00:22:49.722 Delete NVM Set: Not Supported 00:22:49.722 Extended LBA Formats Supported: Not Supported 00:22:49.722 Flexible Data Placement Supported: Not Supported 00:22:49.722 00:22:49.722 Controller Memory Buffer Support 00:22:49.722 ================================ 00:22:49.722 Supported: No 00:22:49.722 00:22:49.722 Persistent Memory Region Support 00:22:49.722 ================================ 00:22:49.722 Supported: No 00:22:49.722 00:22:49.722 Admin Command Set Attributes 00:22:49.722 ============================ 00:22:49.722 Security Send/Receive: Not Supported 00:22:49.722 Format NVM: Not Supported 00:22:49.722 Firmware Activate/Download: Not Supported 00:22:49.722 Namespace Management: Not Supported 00:22:49.722 Device Self-Test: Not Supported 00:22:49.722 Directives: Not Supported 00:22:49.722 NVMe-MI: Not Supported 00:22:49.722 Virtualization Management: Not Supported 00:22:49.722 Doorbell Buffer Config: Not Supported 00:22:49.722 Get LBA Status Capability: Not Supported 00:22:49.722 Command & Feature Lockdown Capability: Not Supported 00:22:49.722 Abort Command Limit: 1 00:22:49.722 Async 
Event Request Limit: 4 00:22:49.722 Number of Firmware Slots: N/A 00:22:49.722 Firmware Slot 1 Read-Only: N/A 00:22:49.722 [2024-07-13 07:08:57.672615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.722 [2024-07-13 07:08:57.672635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.722 [2024-07-13 07:08:57.672639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.722 [2024-07-13 07:08:57.672655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22574c0) on tqpair=0x220b6e0 00:22:49.722 Firmware Activation Without Reset: N/A 00:22:49.722 Multiple Update Detection Support: N/A 00:22:49.722 Firmware Update Granularity: No Information Provided 00:22:49.722 Per-Namespace SMART Log: No 00:22:49.722 Asymmetric Namespace Access Log Page: Not Supported 00:22:49.722 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:49.722 Command Effects Log Page: Not Supported 00:22:49.723 Get Log Page Extended Data: Supported 00:22:49.723 Telemetry Log Pages: Not Supported 00:22:49.723 Persistent Event Log Pages: Not Supported 00:22:49.723 Supported Log Pages Log Page: May Support 00:22:49.723 Commands Supported & Effects Log Page: Not Supported 00:22:49.723 Feature Identifiers & Effects Log Page:May Support 00:22:49.723 NVMe-MI Commands & Effects Log Page: May Support 00:22:49.723 Data Area 4 for Telemetry Log: Not Supported 00:22:49.723 Error Log Page Entries Supported: 128 00:22:49.723 Keep Alive: Not Supported 00:22:49.723 00:22:49.723 NVM Command Set Attributes 00:22:49.723 ========================== 00:22:49.723 Submission Queue Entry Size 00:22:49.723 Max: 1 00:22:49.723 Min: 1 00:22:49.723 Completion Queue Entry Size 00:22:49.723 Max: 1 00:22:49.723 Min: 1 00:22:49.723 Number of Namespaces: 0 00:22:49.723 Compare Command: Not Supported 00:22:49.723 Write Uncorrectable Command: Not Supported 00:22:49.723 Dataset Management Command: Not Supported 00:22:49.723 Write Zeroes Command: Not Supported 00:22:49.723 Set Features Save Field: Not Supported 00:22:49.723 Reservations: Not Supported 00:22:49.723 Timestamp: Not Supported 00:22:49.723 Copy: Not Supported 00:22:49.723 Volatile Write Cache: Not Present 00:22:49.723 Atomic Write Unit (Normal): 1 00:22:49.723 Atomic Write Unit (PFail): 1 00:22:49.723 Atomic Compare & Write Unit: 1 00:22:49.723 Fused Compare & Write: Supported 00:22:49.723 Scatter-Gather List 00:22:49.723 SGL Command Set: Supported 00:22:49.723 SGL Keyed: Supported 00:22:49.723 SGL Bit Bucket Descriptor: Not Supported 00:22:49.723 SGL Metadata Pointer: Not Supported 00:22:49.723 Oversized SGL: Not Supported 00:22:49.723 SGL Metadata Address: Not Supported 00:22:49.723 SGL Offset: Supported 00:22:49.723 Transport SGL Data Block: Not Supported 00:22:49.723 Replay Protected Memory Block: Not Supported 00:22:49.723 00:22:49.723 Firmware Slot Information 00:22:49.723 ========================= 00:22:49.723 Active slot: 0 00:22:49.723 00:22:49.723 00:22:49.723 Error Log 00:22:49.723 ========= 00:22:49.723 00:22:49.723 Active Namespaces 00:22:49.723 ================= 00:22:49.723 Discovery Log Page 00:22:49.723 ================== 00:22:49.723 Generation Counter: 2 00:22:49.723 Number of Records: 2 00:22:49.723 Record Format: 0 00:22:49.723 00:22:49.723 Discovery Log Entry 0 00:22:49.723 ---------------------- 00:22:49.723 Transport Type: 3 (TCP) 00:22:49.723 Address Family: 1 (IPv4) 00:22:49.723 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:49.723 Entry Flags: 00:22:49.723 Duplicate Returned
Information: 1 00:22:49.723 Explicit Persistent Connection Support for Discovery: 1 00:22:49.723 Transport Requirements: 00:22:49.723 Secure Channel: Not Required 00:22:49.723 Port ID: 0 (0x0000) 00:22:49.723 Controller ID: 65535 (0xffff) 00:22:49.723 Admin Max SQ Size: 128 00:22:49.723 Transport Service Identifier: 4420 00:22:49.723 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:49.723 Transport Address: 10.0.0.2 00:22:49.723 Discovery Log Entry 1 00:22:49.723 ---------------------- 00:22:49.723 Transport Type: 3 (TCP) 00:22:49.723 Address Family: 1 (IPv4) 00:22:49.723 Subsystem Type: 2 (NVM Subsystem) 00:22:49.723 Entry Flags: 00:22:49.723 Duplicate Returned Information: 0 00:22:49.723 Explicit Persistent Connection Support for Discovery: 0 00:22:49.723 Transport Requirements: 00:22:49.723 Secure Channel: Not Required 00:22:49.723 Port ID: 0 (0x0000) 00:22:49.723 Controller ID: 65535 (0xffff) 00:22:49.723 Admin Max SQ Size: 128 00:22:49.723 Transport Service Identifier: 4420 00:22:49.723 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:49.723 Transport Address: 10.0.0.2 [2024-07-13 07:08:57.672843] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:49.723 [2024-07-13 07:08:57.672861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2256ec0) on tqpair=0x220b6e0 00:22:49.723 [2024-07-13 07:08:57.672869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.723 [2024-07-13 07:08:57.672877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257040) on tqpair=0x220b6e0 00:22:49.723 [2024-07-13 07:08:57.672881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.723 [2024-07-13 07:08:57.672886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22571c0) on tqpair=0x220b6e0 00:22:49.723 [2024-07-13 07:08:57.672890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.723 [2024-07-13 07:08:57.672895] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.723 [2024-07-13 07:08:57.672900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.723 [2024-07-13 07:08:57.672910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.672914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.672917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.723 [2024-07-13 07:08:57.672925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.723 [2024-07-13 07:08:57.672953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.723 [2024-07-13 07:08:57.673022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.723 [2024-07-13 07:08:57.673029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.723 [2024-07-13 07:08:57.673032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.673036] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.723 [2024-07-13 07:08:57.673044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.673048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.673060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.723 [2024-07-13 07:08:57.673067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.723 [2024-07-13 07:08:57.673089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.723 [2024-07-13 07:08:57.673183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.723 [2024-07-13 07:08:57.673189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.723 [2024-07-13 07:08:57.673193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.673197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.723 [2024-07-13 07:08:57.673202] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:49.723 [2024-07-13 07:08:57.673207] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:49.723 [2024-07-13 07:08:57.673216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.673220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.673224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.723 [2024-07-13 07:08:57.673231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.723 [2024-07-13 07:08:57.673248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.723 [2024-07-13 07:08:57.673305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.723 [2024-07-13 07:08:57.673311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.723 [2024-07-13 07:08:57.673315] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.673318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.723 [2024-07-13 07:08:57.673329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.673333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.723 [2024-07-13 07:08:57.673337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.673343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.673360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.673425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.673431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 
07:08:57.673434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.673448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.673462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.673479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.673543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.673563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 07:08:57.673568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.673582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.673598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.673617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.673682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.673689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 07:08:57.673693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.673706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.673721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.673738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.673797] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.673803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 07:08:57.673806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on 
tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.673820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.673834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.673851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.673910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.673917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 07:08:57.673920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.673934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.673941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.673948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.673964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.674021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.674028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 07:08:57.674031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.674044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.674059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.674075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.674128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.674134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 07:08:57.674138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.674151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674155] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.674165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.674182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.674262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.674269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 07:08:57.674273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.674286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.674301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.674319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.674376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.674385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 07:08:57.674388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.674402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.724 [2024-07-13 07:08:57.674417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.724 [2024-07-13 07:08:57.674434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.724 [2024-07-13 07:08:57.674497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.724 [2024-07-13 07:08:57.674509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.724 [2024-07-13 07:08:57.674513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.724 [2024-07-13 07:08:57.674527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.724 [2024-07-13 07:08:57.674531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.725 [2024-07-13 07:08:57.674535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.725 
[2024-07-13 07:08:57.674542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-13 07:08:57.678562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.725 [2024-07-13 07:08:57.678586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.725 [2024-07-13 07:08:57.678606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.725 [2024-07-13 07:08:57.678610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.725 [2024-07-13 07:08:57.678614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.725 [2024-07-13 07:08:57.678626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.725 [2024-07-13 07:08:57.678631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.725 [2024-07-13 07:08:57.678635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x220b6e0) 00:22:49.725 [2024-07-13 07:08:57.678643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.725 [2024-07-13 07:08:57.678666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2257340, cid 3, qid 0 00:22:49.725 [2024-07-13 07:08:57.678729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.725 [2024-07-13 07:08:57.678735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.725 [2024-07-13 07:08:57.678739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.725 [2024-07-13 07:08:57.678743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2257340) on tqpair=0x220b6e0 00:22:49.725 [2024-07-13 07:08:57.678751] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:49.725 00:22:49.725 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:49.725 [2024-07-13 07:08:57.716061] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
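The flow that the DEBUG trace below records for nqn.2016-06.io.spdk:cnode1 (parse the transport ID, connect over TCP, enable the controller, issue IDENTIFY, walk the active namespaces, detach) can be reproduced with the public SPDK C API roughly as in the following sketch. Only the transport string is taken from the -r argument above; the program name "identify_sketch", the printed fields, and the minimal error handling are illustrative assumptions, not the test's own code.

/*
 * Minimal sketch (assumptions noted above): connect to the NVMe-oF/TCP subsystem
 * that the spdk_nvme_identify invocation targets and read back a few of the
 * fields shown in the identify output.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;
	union spdk_nvme_vs_register vs;
	uint32_t nsid;

	/* Environment setup; the test above passes similar DPDK EAL options. */
	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";   /* hypothetical app name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	/* Same transport ID the -r argument above describes. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the connect/enable/identify state machine seen in the trace. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);   /* cached Identify Controller data */
	vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);   /* the "read vs" step in the trace */
	printf("NVMe spec %u.%u, FW %.8s, MDTS 2^%u (times CAP.MPSMIN page size)\n",
	       vs.bits.mjr, vs.bits.mnr, (const char *)cdata->fr, cdata->mdts);
	printf("Subsystem NQN: %s\n", (const char *)cdata->subnqn);

	/* "Namespace 1 was added" in the trace: iterate whatever is active. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("Namespace %u: %" PRIu64 " bytes\n",
		       nsid, spdk_nvme_ns_get_size(ns));
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}

Built against the SPDK tree this job checks out (link details depend on the build), the single spdk_nvme_connect() call should drive essentially the same sequence that the nvme_ctrlr.c / nvme_tcp.c DEBUG lines below document: read VS and CAP, set CC.EN, wait for CSTS.RDY, IDENTIFY controller, configure AER, set keep-alive and number of queues, then identify the active namespaces.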
00:22:49.725 [2024-07-13 07:08:57.716131] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104879 ] 00:22:49.988 [2024-07-13 07:08:57.852347] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:49.988 [2024-07-13 07:08:57.852428] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:49.988 [2024-07-13 07:08:57.852435] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:49.988 [2024-07-13 07:08:57.852451] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:49.988 [2024-07-13 07:08:57.852461] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:49.988 [2024-07-13 07:08:57.852624] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:49.988 [2024-07-13 07:08:57.852677] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e526e0 0 00:22:49.988 [2024-07-13 07:08:57.866575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:49.988 [2024-07-13 07:08:57.866598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:49.988 [2024-07-13 07:08:57.866616] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:49.988 [2024-07-13 07:08:57.866621] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:49.988 [2024-07-13 07:08:57.866681] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.866688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.866692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.988 [2024-07-13 07:08:57.866707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:49.988 [2024-07-13 07:08:57.866740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.988 [2024-07-13 07:08:57.874575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.988 [2024-07-13 07:08:57.874596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.988 [2024-07-13 07:08:57.874602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.874616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.988 [2024-07-13 07:08:57.874628] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:49.988 [2024-07-13 07:08:57.874637] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:49.988 [2024-07-13 07:08:57.874643] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:49.988 [2024-07-13 07:08:57.874671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.874677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.874681] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.988 [2024-07-13 07:08:57.874690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-13 07:08:57.874721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.988 [2024-07-13 07:08:57.874814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.988 [2024-07-13 07:08:57.874821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.988 [2024-07-13 07:08:57.874825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.874829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.988 [2024-07-13 07:08:57.874835] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:49.988 [2024-07-13 07:08:57.874843] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:49.988 [2024-07-13 07:08:57.874851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.874855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.874859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.988 [2024-07-13 07:08:57.874866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-13 07:08:57.874887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.988 [2024-07-13 07:08:57.874982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.988 [2024-07-13 07:08:57.874988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.988 [2024-07-13 07:08:57.874992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.874996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.988 [2024-07-13 07:08:57.875003] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:49.988 [2024-07-13 07:08:57.875011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:49.988 [2024-07-13 07:08:57.875018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.875023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.875026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.988 [2024-07-13 07:08:57.875034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-13 07:08:57.875053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.988 [2024-07-13 07:08:57.875126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.988 [2024-07-13 07:08:57.875133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.988 [2024-07-13 07:08:57.875136] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.875141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.988 [2024-07-13 07:08:57.875146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:49.988 [2024-07-13 07:08:57.875156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.875161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.988 [2024-07-13 07:08:57.875165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.988 [2024-07-13 07:08:57.875172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.988 [2024-07-13 07:08:57.875190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.988 [2024-07-13 07:08:57.875260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.989 [2024-07-13 07:08:57.875266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.989 [2024-07-13 07:08:57.875270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.989 [2024-07-13 07:08:57.875280] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:49.989 [2024-07-13 07:08:57.875285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:49.989 [2024-07-13 07:08:57.875293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:49.989 [2024-07-13 07:08:57.875399] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:49.989 [2024-07-13 07:08:57.875411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:49.989 [2024-07-13 07:08:57.875422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.989 [2024-07-13 07:08:57.875438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-13 07:08:57.875459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.989 [2024-07-13 07:08:57.875525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.989 [2024-07-13 07:08:57.875537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.989 [2024-07-13 07:08:57.875541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.989 [2024-07-13 07:08:57.875562] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:49.989 [2024-07-13 07:08:57.875575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.989 [2024-07-13 07:08:57.875592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-13 07:08:57.875613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.989 [2024-07-13 07:08:57.875695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.989 [2024-07-13 07:08:57.875702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.989 [2024-07-13 07:08:57.875705] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.989 [2024-07-13 07:08:57.875715] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:49.989 [2024-07-13 07:08:57.875720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:49.989 [2024-07-13 07:08:57.875728] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:49.989 [2024-07-13 07:08:57.875741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:49.989 [2024-07-13 07:08:57.875751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.989 [2024-07-13 07:08:57.875764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.989 [2024-07-13 07:08:57.875784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.989 [2024-07-13 07:08:57.875898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.989 [2024-07-13 07:08:57.875905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.989 [2024-07-13 07:08:57.875909] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875913] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e526e0): datao=0, datal=4096, cccid=0 00:22:49.989 [2024-07-13 07:08:57.875919] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9dec0) on tqpair(0x1e526e0): expected_datao=0, payload_size=4096 00:22:49.989 [2024-07-13 07:08:57.875923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875931] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875936] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.989 [2024-07-13 
07:08:57.875944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.989 [2024-07-13 07:08:57.875950] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.989 [2024-07-13 07:08:57.875954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.875958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.989 [2024-07-13 07:08:57.875966] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:49.989 [2024-07-13 07:08:57.875972] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:49.989 [2024-07-13 07:08:57.875976] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:49.989 [2024-07-13 07:08:57.875982] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:49.989 [2024-07-13 07:08:57.875987] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:49.989 [2024-07-13 07:08:57.875992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:49.989 [2024-07-13 07:08:57.876000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:49.989 [2024-07-13 07:08:57.876008] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876012] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.989 [2024-07-13 07:08:57.876024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.989 [2024-07-13 07:08:57.876044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.989 [2024-07-13 07:08:57.876124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.989 [2024-07-13 07:08:57.876130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.989 [2024-07-13 07:08:57.876134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.989 [2024-07-13 07:08:57.876146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e526e0) 00:22:49.989 [2024-07-13 07:08:57.876161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.989 [2024-07-13 07:08:57.876167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e526e0) 00:22:49.989 
[2024-07-13 07:08:57.876183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.989 [2024-07-13 07:08:57.876189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e526e0) 00:22:49.989 [2024-07-13 07:08:57.876203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.989 [2024-07-13 07:08:57.876209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.989 [2024-07-13 07:08:57.876217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.989 [2024-07-13 07:08:57.876223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.989 [2024-07-13 07:08:57.876229] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.876243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.876251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e526e0) 00:22:49.990 [2024-07-13 07:08:57.876262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.990 [2024-07-13 07:08:57.876284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9dec0, cid 0, qid 0 00:22:49.990 [2024-07-13 07:08:57.876290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e040, cid 1, qid 0 00:22:49.990 [2024-07-13 07:08:57.876295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e1c0, cid 2, qid 0 00:22:49.990 [2024-07-13 07:08:57.876300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.990 [2024-07-13 07:08:57.876305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e4c0, cid 4, qid 0 00:22:49.990 [2024-07-13 07:08:57.876419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.990 [2024-07-13 07:08:57.876426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.990 [2024-07-13 07:08:57.876429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e4c0) on tqpair=0x1e526e0 00:22:49.990 [2024-07-13 07:08:57.876439] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:49.990 [2024-07-13 07:08:57.876449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.876458] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.876465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.876472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e526e0) 00:22:49.990 [2024-07-13 07:08:57.876487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.990 [2024-07-13 07:08:57.876507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e4c0, cid 4, qid 0 00:22:49.990 [2024-07-13 07:08:57.876590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.990 [2024-07-13 07:08:57.876599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.990 [2024-07-13 07:08:57.876602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e4c0) on tqpair=0x1e526e0 00:22:49.990 [2024-07-13 07:08:57.876662] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.876673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.876682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e526e0) 00:22:49.990 [2024-07-13 07:08:57.876693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.990 [2024-07-13 07:08:57.876716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e4c0, cid 4, qid 0 00:22:49.990 [2024-07-13 07:08:57.876796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.990 [2024-07-13 07:08:57.876803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.990 [2024-07-13 07:08:57.876807] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876811] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e526e0): datao=0, datal=4096, cccid=4 00:22:49.990 [2024-07-13 07:08:57.876816] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9e4c0) on tqpair(0x1e526e0): expected_datao=0, payload_size=4096 00:22:49.990 [2024-07-13 07:08:57.876820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876827] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876831] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.990 [2024-07-13 07:08:57.876845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:49.990 [2024-07-13 07:08:57.876849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e4c0) on tqpair=0x1e526e0 00:22:49.990 [2024-07-13 07:08:57.876868] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:49.990 [2024-07-13 07:08:57.876880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.876891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.876898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.876903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e526e0) 00:22:49.990 [2024-07-13 07:08:57.876910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.990 [2024-07-13 07:08:57.876931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e4c0, cid 4, qid 0 00:22:49.990 [2024-07-13 07:08:57.877026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.990 [2024-07-13 07:08:57.877033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.990 [2024-07-13 07:08:57.877036] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877040] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e526e0): datao=0, datal=4096, cccid=4 00:22:49.990 [2024-07-13 07:08:57.877045] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9e4c0) on tqpair(0x1e526e0): expected_datao=0, payload_size=4096 00:22:49.990 [2024-07-13 07:08:57.877049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877056] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877060] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.990 [2024-07-13 07:08:57.877074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.990 [2024-07-13 07:08:57.877078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e4c0) on tqpair=0x1e526e0 00:22:49.990 [2024-07-13 07:08:57.877098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.877109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:49.990 [2024-07-13 07:08:57.877118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e526e0) 00:22:49.990 [2024-07-13 07:08:57.877129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.990 [2024-07-13 07:08:57.877149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e4c0, cid 4, qid 0 00:22:49.990 [2024-07-13 07:08:57.877232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.990 [2024-07-13 07:08:57.877239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.990 [2024-07-13 07:08:57.877242] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877246] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e526e0): datao=0, datal=4096, cccid=4 00:22:49.990 [2024-07-13 07:08:57.877251] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9e4c0) on tqpair(0x1e526e0): expected_datao=0, payload_size=4096 00:22:49.990 [2024-07-13 07:08:57.877255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877262] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877266] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.990 [2024-07-13 07:08:57.877280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.990 [2024-07-13 07:08:57.877284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.990 [2024-07-13 07:08:57.877288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e4c0) on tqpair=0x1e526e0 00:22:49.991 [2024-07-13 07:08:57.877297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:49.991 [2024-07-13 07:08:57.877307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:49.991 [2024-07-13 07:08:57.877319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:49.991 [2024-07-13 07:08:57.877326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:49.991 [2024-07-13 07:08:57.877332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:49.991 [2024-07-13 07:08:57.877338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:49.991 [2024-07-13 07:08:57.877344] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:49.991 [2024-07-13 07:08:57.877349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:49.991 [2024-07-13 07:08:57.877354] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:49.991 [2024-07-13 07:08:57.877393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e526e0) 00:22:49.991 [2024-07-13 07:08:57.877411] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.991 [2024-07-13 07:08:57.877419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e526e0) 00:22:49.991 [2024-07-13 07:08:57.877433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.991 [2024-07-13 07:08:57.877466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e4c0, cid 4, qid 0 00:22:49.991 [2024-07-13 07:08:57.877474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e640, cid 5, qid 0 00:22:49.991 [2024-07-13 07:08:57.877614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.991 [2024-07-13 07:08:57.877623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.991 [2024-07-13 07:08:57.877627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e4c0) on tqpair=0x1e526e0 00:22:49.991 [2024-07-13 07:08:57.877639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.991 [2024-07-13 07:08:57.877645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.991 [2024-07-13 07:08:57.877648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e640) on tqpair=0x1e526e0 00:22:49.991 [2024-07-13 07:08:57.877663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e526e0) 00:22:49.991 [2024-07-13 07:08:57.877676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.991 [2024-07-13 07:08:57.877697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e640, cid 5, qid 0 00:22:49.991 [2024-07-13 07:08:57.877769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.991 [2024-07-13 07:08:57.877775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.991 [2024-07-13 07:08:57.877779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e640) on tqpair=0x1e526e0 00:22:49.991 [2024-07-13 07:08:57.877794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e526e0) 00:22:49.991 [2024-07-13 07:08:57.877806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.991 [2024-07-13 07:08:57.877825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e640, cid 5, qid 0 00:22:49.991 [2024-07-13 07:08:57.877903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.991 
[2024-07-13 07:08:57.877910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.991 [2024-07-13 07:08:57.877914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e640) on tqpair=0x1e526e0 00:22:49.991 [2024-07-13 07:08:57.877929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.877934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e526e0) 00:22:49.991 [2024-07-13 07:08:57.877941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.991 [2024-07-13 07:08:57.877976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e640, cid 5, qid 0 00:22:49.991 [2024-07-13 07:08:57.878042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.991 [2024-07-13 07:08:57.878049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.991 [2024-07-13 07:08:57.878053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.878057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e640) on tqpair=0x1e526e0 00:22:49.991 [2024-07-13 07:08:57.878077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.878083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e526e0) 00:22:49.991 [2024-07-13 07:08:57.878090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.991 [2024-07-13 07:08:57.878097] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.878102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e526e0) 00:22:49.991 [2024-07-13 07:08:57.878108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.991 [2024-07-13 07:08:57.878115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.878120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e526e0) 00:22:49.991 [2024-07-13 07:08:57.878126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.991 [2024-07-13 07:08:57.878138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.878142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e526e0) 00:22:49.991 [2024-07-13 07:08:57.878149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.991 [2024-07-13 07:08:57.878170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e640, cid 5, qid 0 00:22:49.991 [2024-07-13 07:08:57.878177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e4c0, cid 4, qid 0 00:22:49.991 [2024-07-13 07:08:57.878182] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e7c0, cid 6, qid 0 00:22:49.991 [2024-07-13 07:08:57.878187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e940, cid 7, qid 0 00:22:49.991 [2024-07-13 07:08:57.878393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.991 [2024-07-13 07:08:57.878408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.991 [2024-07-13 07:08:57.878413] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.878417] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e526e0): datao=0, datal=8192, cccid=5 00:22:49.991 [2024-07-13 07:08:57.878422] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9e640) on tqpair(0x1e526e0): expected_datao=0, payload_size=8192 00:22:49.991 [2024-07-13 07:08:57.878427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.878444] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.878450] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.991 [2024-07-13 07:08:57.878456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.991 [2024-07-13 07:08:57.878462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.992 [2024-07-13 07:08:57.878465] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.878469] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e526e0): datao=0, datal=512, cccid=4 00:22:49.992 [2024-07-13 07:08:57.878474] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9e4c0) on tqpair(0x1e526e0): expected_datao=0, payload_size=512 00:22:49.992 [2024-07-13 07:08:57.878478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.878484] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.878488] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.878494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.992 [2024-07-13 07:08:57.878499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.992 [2024-07-13 07:08:57.878503] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.878506] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e526e0): datao=0, datal=512, cccid=6 00:22:49.992 [2024-07-13 07:08:57.878511] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9e7c0) on tqpair(0x1e526e0): expected_datao=0, payload_size=512 00:22:49.992 [2024-07-13 07:08:57.878515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.878520] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.878524] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.878530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.992 [2024-07-13 07:08:57.878535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.992 [2024-07-13 07:08:57.878538] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.878542] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1e526e0): datao=0, datal=4096, cccid=7 00:22:49.992 [2024-07-13 07:08:57.878546] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e9e940) on tqpair(0x1e526e0): expected_datao=0, payload_size=4096 00:22:49.992 [2024-07-13 07:08:57.882579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.882591] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.882596] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.882606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.992 [2024-07-13 07:08:57.882612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.992 [2024-07-13 07:08:57.882616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.882620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e640) on tqpair=0x1e526e0 00:22:49.992 ===================================================== 00:22:49.992 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.992 ===================================================== 00:22:49.992 Controller Capabilities/Features 00:22:49.992 ================================ 00:22:49.992 Vendor ID: 8086 00:22:49.992 Subsystem Vendor ID: 8086 00:22:49.992 Serial Number: SPDK00000000000001 00:22:49.992 Model Number: SPDK bdev Controller 00:22:49.992 Firmware Version: 24.09 00:22:49.992 Recommended Arb Burst: 6 00:22:49.992 IEEE OUI Identifier: e4 d2 5c 00:22:49.992 Multi-path I/O 00:22:49.992 May have multiple subsystem ports: Yes 00:22:49.992 May have multiple controllers: Yes 00:22:49.992 Associated with SR-IOV VF: No 00:22:49.992 Max Data Transfer Size: 131072 00:22:49.992 Max Number of Namespaces: 32 00:22:49.992 Max Number of I/O Queues: 127 00:22:49.992 NVMe Specification Version (VS): 1.3 00:22:49.992 NVMe Specification Version (Identify): 1.3 00:22:49.992 Maximum Queue Entries: 128 00:22:49.992 Contiguous Queues Required: Yes 00:22:49.992 Arbitration Mechanisms Supported 00:22:49.992 Weighted Round Robin: Not Supported 00:22:49.992 Vendor Specific: Not Supported 00:22:49.992 Reset Timeout: 15000 ms 00:22:49.992 Doorbell Stride: 4 bytes 00:22:49.992 NVM Subsystem Reset: Not Supported 00:22:49.992 Command Sets Supported 00:22:49.992 NVM Command Set: Supported 00:22:49.992 Boot Partition: Not Supported 00:22:49.992 Memory Page Size Minimum: 4096 bytes 00:22:49.992 Memory Page Size Maximum: 4096 bytes 00:22:49.992 Persistent Memory Region: Not Supported 00:22:49.992 Optional Asynchronous Events Supported 00:22:49.992 Namespace Attribute Notices: Supported 00:22:49.992 Firmware Activation Notices: Not Supported 00:22:49.992 ANA Change Notices: Not Supported 00:22:49.992 PLE Aggregate Log Change Notices: Not Supported 00:22:49.992 LBA Status Info Alert Notices: Not Supported 00:22:49.992 EGE Aggregate Log Change Notices: Not Supported 00:22:49.992 Normal NVM Subsystem Shutdown event: Not Supported 00:22:49.992 Zone Descriptor Change Notices: Not Supported 00:22:49.992 Discovery Log Change Notices: Not Supported 00:22:49.992 Controller Attributes 00:22:49.992 128-bit Host Identifier: Supported 00:22:49.992 Non-Operational Permissive Mode: Not Supported 00:22:49.992 NVM Sets: Not Supported 00:22:49.992 Read Recovery Levels: Not Supported 00:22:49.992 Endurance Groups: Not Supported 00:22:49.992 Predictable Latency Mode: Not Supported 00:22:49.992 Traffic Based Keep ALive: Not 
Supported 00:22:49.992 Namespace Granularity: Not Supported 00:22:49.992 SQ Associations: Not Supported 00:22:49.992 UUID List: Not Supported 00:22:49.992 Multi-Domain Subsystem: Not Supported 00:22:49.992 Fixed Capacity Management: Not Supported 00:22:49.992 Variable Capacity Management: Not Supported 00:22:49.992 Delete Endurance Group: Not Supported 00:22:49.992 Delete NVM Set: Not Supported 00:22:49.992 Extended LBA Formats Supported: Not Supported 00:22:49.992 Flexible Data Placement Supported: Not Supported 00:22:49.992 00:22:49.992 Controller Memory Buffer Support 00:22:49.992 ================================ 00:22:49.992 Supported: No 00:22:49.992 00:22:49.992 Persistent Memory Region Support 00:22:49.992 ================================ 00:22:49.992 Supported: No 00:22:49.992 00:22:49.992 Admin Command Set Attributes 00:22:49.992 ============================ 00:22:49.992 Security Send/Receive: Not Supported 00:22:49.992 Format NVM: Not Supported 00:22:49.992 Firmware Activate/Download: Not Supported 00:22:49.992 Namespace Management: Not Supported 00:22:49.992 Device Self-Test: Not Supported 00:22:49.992 Directives: Not Supported 00:22:49.992 NVMe-MI: Not Supported 00:22:49.992 Virtualization Management: Not Supported 00:22:49.992 Doorbell Buffer Config: Not Supported 00:22:49.992 Get LBA Status Capability: Not Supported 00:22:49.992 Command & Feature Lockdown Capability: Not Supported 00:22:49.992 Abort Command Limit: 4 00:22:49.992 Async Event Request Limit: 4 00:22:49.992 Number of Firmware Slots: N/A 00:22:49.992 Firmware Slot 1 Read-Only: N/A 00:22:49.992 Firmware Activation Without Reset: [2024-07-13 07:08:57.882640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.992 [2024-07-13 07:08:57.882655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.992 [2024-07-13 07:08:57.882659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.882663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e4c0) on tqpair=0x1e526e0 00:22:49.992 [2024-07-13 07:08:57.882676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.992 [2024-07-13 07:08:57.882682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.992 [2024-07-13 07:08:57.882686] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.992 [2024-07-13 07:08:57.882690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e7c0) on tqpair=0x1e526e0 00:22:49.992 [2024-07-13 07:08:57.882697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.992 [2024-07-13 07:08:57.882702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.992 [2024-07-13 07:08:57.882706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.993 [2024-07-13 07:08:57.882710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e940) on tqpair=0x1e526e0 00:22:49.993 N/A 00:22:49.993 Multiple Update Detection Support: N/A 00:22:49.993 Firmware Update Granularity: No Information Provided 00:22:49.993 Per-Namespace SMART Log: No 00:22:49.993 Asymmetric Namespace Access Log Page: Not Supported 00:22:49.993 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:49.993 Command Effects Log Page: Supported 00:22:49.993 Get Log Page Extended Data: Supported 00:22:49.993 Telemetry Log Pages: Not Supported 00:22:49.993 Persistent Event Log Pages: Not Supported 00:22:49.993 Supported Log Pages Log Page: May Support 
00:22:49.993 Commands Supported & Effects Log Page: Not Supported 00:22:49.993 Feature Identifiers & Effects Log Page:May Support 00:22:49.993 NVMe-MI Commands & Effects Log Page: May Support 00:22:49.993 Data Area 4 for Telemetry Log: Not Supported 00:22:49.993 Error Log Page Entries Supported: 128 00:22:49.993 Keep Alive: Supported 00:22:49.993 Keep Alive Granularity: 10000 ms 00:22:49.993 00:22:49.993 NVM Command Set Attributes 00:22:49.993 ========================== 00:22:49.993 Submission Queue Entry Size 00:22:49.993 Max: 64 00:22:49.993 Min: 64 00:22:49.993 Completion Queue Entry Size 00:22:49.993 Max: 16 00:22:49.993 Min: 16 00:22:49.993 Number of Namespaces: 32 00:22:49.993 Compare Command: Supported 00:22:49.993 Write Uncorrectable Command: Not Supported 00:22:49.993 Dataset Management Command: Supported 00:22:49.993 Write Zeroes Command: Supported 00:22:49.993 Set Features Save Field: Not Supported 00:22:49.993 Reservations: Supported 00:22:49.993 Timestamp: Not Supported 00:22:49.993 Copy: Supported 00:22:49.993 Volatile Write Cache: Present 00:22:49.993 Atomic Write Unit (Normal): 1 00:22:49.993 Atomic Write Unit (PFail): 1 00:22:49.993 Atomic Compare & Write Unit: 1 00:22:49.993 Fused Compare & Write: Supported 00:22:49.993 Scatter-Gather List 00:22:49.993 SGL Command Set: Supported 00:22:49.993 SGL Keyed: Supported 00:22:49.993 SGL Bit Bucket Descriptor: Not Supported 00:22:49.993 SGL Metadata Pointer: Not Supported 00:22:49.993 Oversized SGL: Not Supported 00:22:49.993 SGL Metadata Address: Not Supported 00:22:49.993 SGL Offset: Supported 00:22:49.993 Transport SGL Data Block: Not Supported 00:22:49.993 Replay Protected Memory Block: Not Supported 00:22:49.993 00:22:49.993 Firmware Slot Information 00:22:49.993 ========================= 00:22:49.993 Active slot: 1 00:22:49.993 Slot 1 Firmware Revision: 24.09 00:22:49.993 00:22:49.993 00:22:49.993 Commands Supported and Effects 00:22:49.993 ============================== 00:22:49.993 Admin Commands 00:22:49.993 -------------- 00:22:49.993 Get Log Page (02h): Supported 00:22:49.993 Identify (06h): Supported 00:22:49.993 Abort (08h): Supported 00:22:49.993 Set Features (09h): Supported 00:22:49.993 Get Features (0Ah): Supported 00:22:49.993 Asynchronous Event Request (0Ch): Supported 00:22:49.993 Keep Alive (18h): Supported 00:22:49.993 I/O Commands 00:22:49.993 ------------ 00:22:49.993 Flush (00h): Supported LBA-Change 00:22:49.993 Write (01h): Supported LBA-Change 00:22:49.993 Read (02h): Supported 00:22:49.993 Compare (05h): Supported 00:22:49.993 Write Zeroes (08h): Supported LBA-Change 00:22:49.993 Dataset Management (09h): Supported LBA-Change 00:22:49.993 Copy (19h): Supported LBA-Change 00:22:49.993 00:22:49.993 Error Log 00:22:49.993 ========= 00:22:49.993 00:22:49.993 Arbitration 00:22:49.993 =========== 00:22:49.993 Arbitration Burst: 1 00:22:49.993 00:22:49.993 Power Management 00:22:49.993 ================ 00:22:49.993 Number of Power States: 1 00:22:49.993 Current Power State: Power State #0 00:22:49.993 Power State #0: 00:22:49.993 Max Power: 0.00 W 00:22:49.993 Non-Operational State: Operational 00:22:49.993 Entry Latency: Not Reported 00:22:49.993 Exit Latency: Not Reported 00:22:49.993 Relative Read Throughput: 0 00:22:49.993 Relative Read Latency: 0 00:22:49.993 Relative Write Throughput: 0 00:22:49.993 Relative Write Latency: 0 00:22:49.993 Idle Power: Not Reported 00:22:49.993 Active Power: Not Reported 00:22:49.993 Non-Operational Permissive Mode: Not Supported 00:22:49.993 00:22:49.993 Health 
Information 00:22:49.993 ================== 00:22:49.993 Critical Warnings: 00:22:49.993 Available Spare Space: OK 00:22:49.993 Temperature: OK 00:22:49.993 Device Reliability: OK 00:22:49.993 Read Only: No 00:22:49.993 Volatile Memory Backup: OK 00:22:49.993 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:49.993 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:49.993 Available Spare: 0% 00:22:49.993 Available Spare Threshold: 0% 00:22:49.993 Life Percentage Used:[2024-07-13 07:08:57.882831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.993 [2024-07-13 07:08:57.882839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e526e0) 00:22:49.993 [2024-07-13 07:08:57.882847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.993 [2024-07-13 07:08:57.882875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e940, cid 7, qid 0 00:22:49.993 [2024-07-13 07:08:57.882960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.993 [2024-07-13 07:08:57.882967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.993 [2024-07-13 07:08:57.882971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.993 [2024-07-13 07:08:57.882975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e940) on tqpair=0x1e526e0 00:22:49.993 [2024-07-13 07:08:57.883029] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:49.993 [2024-07-13 07:08:57.883043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9dec0) on tqpair=0x1e526e0 00:22:49.993 [2024-07-13 07:08:57.883050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.993 [2024-07-13 07:08:57.883056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e040) on tqpair=0x1e526e0 00:22:49.993 [2024-07-13 07:08:57.883060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.993 [2024-07-13 07:08:57.883066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e1c0) on tqpair=0x1e526e0 00:22:49.993 [2024-07-13 07:08:57.883071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.993 [2024-07-13 07:08:57.883076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.993 [2024-07-13 07:08:57.883080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.994 [2024-07-13 07:08:57.883089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.994 [2024-07-13 07:08:57.883105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.994 [2024-07-13 07:08:57.883130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.994 [2024-07-13 
07:08:57.883192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.994 [2024-07-13 07:08:57.883199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.994 [2024-07-13 07:08:57.883202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.994 [2024-07-13 07:08:57.883216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.994 [2024-07-13 07:08:57.883231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.994 [2024-07-13 07:08:57.883253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.994 [2024-07-13 07:08:57.883354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.994 [2024-07-13 07:08:57.883367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.994 [2024-07-13 07:08:57.883371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.994 [2024-07-13 07:08:57.883382] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:49.994 [2024-07-13 07:08:57.883388] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:49.994 [2024-07-13 07:08:57.883398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.994 [2024-07-13 07:08:57.883413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.994 [2024-07-13 07:08:57.883433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.994 [2024-07-13 07:08:57.883502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.994 [2024-07-13 07:08:57.883508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.994 [2024-07-13 07:08:57.883512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.994 [2024-07-13 07:08:57.883527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.994 [2024-07-13 07:08:57.883542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.994 [2024-07-13 07:08:57.883575] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.994 [2024-07-13 07:08:57.883652] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.994 [2024-07-13 07:08:57.883659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.994 [2024-07-13 07:08:57.883663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.994 [2024-07-13 07:08:57.883677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.994 [2024-07-13 07:08:57.883693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.994 [2024-07-13 07:08:57.883712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.994 [2024-07-13 07:08:57.883784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.994 [2024-07-13 07:08:57.883790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.994 [2024-07-13 07:08:57.883795] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.994 [2024-07-13 07:08:57.883809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.994 [2024-07-13 07:08:57.883824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.994 [2024-07-13 07:08:57.883842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.994 [2024-07-13 07:08:57.883910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.994 [2024-07-13 07:08:57.883917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.994 [2024-07-13 07:08:57.883921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.994 [2024-07-13 07:08:57.883935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.883943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.994 [2024-07-13 07:08:57.883950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.994 [2024-07-13 07:08:57.883968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.994 [2024-07-13 07:08:57.884033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.994 [2024-07-13 
07:08:57.884039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.994 [2024-07-13 07:08:57.884043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.884047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.994 [2024-07-13 07:08:57.884057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.884061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.994 [2024-07-13 07:08:57.884065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.994 [2024-07-13 07:08:57.884072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.994 [2024-07-13 07:08:57.884090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.884154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.884174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.884178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.884193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.995 [2024-07-13 07:08:57.884210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.995 [2024-07-13 07:08:57.884230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.884305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.884312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.884316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.884330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.995 [2024-07-13 07:08:57.884345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.995 [2024-07-13 07:08:57.884363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.884434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.884440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.884444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 
[2024-07-13 07:08:57.884448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.884458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.995 [2024-07-13 07:08:57.884474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.995 [2024-07-13 07:08:57.884492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.884585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.884593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.884597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.884612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.995 [2024-07-13 07:08:57.884628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.995 [2024-07-13 07:08:57.884650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.884712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.884718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.884722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.884736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.995 [2024-07-13 07:08:57.884752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.995 [2024-07-13 07:08:57.884772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.884838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.884845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.884849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.884863] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.995 [2024-07-13 07:08:57.884879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.995 [2024-07-13 07:08:57.884897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.884961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.884968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.884971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.884986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.884994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.995 [2024-07-13 07:08:57.885001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.995 [2024-07-13 07:08:57.885019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.885101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.885107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.885111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.885115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.885125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.885130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.885133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.995 [2024-07-13 07:08:57.885140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.995 [2024-07-13 07:08:57.885159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.885232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.885239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.885242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.885246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.885256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.885261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.885265] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.995 [2024-07-13 07:08:57.885272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.995 [2024-07-13 07:08:57.885290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.995 [2024-07-13 07:08:57.885364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.995 [2024-07-13 07:08:57.885370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.995 [2024-07-13 07:08:57.885374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.885378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.995 [2024-07-13 07:08:57.885388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.885392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.995 [2024-07-13 07:08:57.885396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.996 [2024-07-13 07:08:57.885403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.996 [2024-07-13 07:08:57.885421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.996 [2024-07-13 07:08:57.885484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.996 [2024-07-13 07:08:57.885501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.996 [2024-07-13 07:08:57.885505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.996 [2024-07-13 07:08:57.885519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.996 [2024-07-13 07:08:57.885534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.996 [2024-07-13 07:08:57.885552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.996 [2024-07-13 07:08:57.885642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.996 [2024-07-13 07:08:57.885651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.996 [2024-07-13 07:08:57.885655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.996 [2024-07-13 07:08:57.885670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.996 [2024-07-13 07:08:57.885686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.996 [2024-07-13 07:08:57.885708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.996 [2024-07-13 07:08:57.885805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.996 [2024-07-13 07:08:57.885812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.996 [2024-07-13 07:08:57.885816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.996 [2024-07-13 07:08:57.885832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.996 [2024-07-13 07:08:57.885848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.996 [2024-07-13 07:08:57.885868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.996 [2024-07-13 07:08:57.885958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.996 [2024-07-13 07:08:57.885965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.996 [2024-07-13 07:08:57.885969] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.996 [2024-07-13 07:08:57.885983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.885992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.996 [2024-07-13 07:08:57.885999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.996 [2024-07-13 07:08:57.886017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.996 [2024-07-13 07:08:57.886115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.996 [2024-07-13 07:08:57.886121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.996 [2024-07-13 07:08:57.886125] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.886130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.996 [2024-07-13 07:08:57.886140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.886145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.886149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.996 [2024-07-13 07:08:57.886157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.996 [2024-07-13 07:08:57.886175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.996 [2024-07-13 
07:08:57.886302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.996 [2024-07-13 07:08:57.886314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.996 [2024-07-13 07:08:57.886319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.886324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.996 [2024-07-13 07:08:57.886334] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.886339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.886343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.996 [2024-07-13 07:08:57.886351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.996 [2024-07-13 07:08:57.886372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.996 [2024-07-13 07:08:57.886467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.996 [2024-07-13 07:08:57.886474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.996 [2024-07-13 07:08:57.886478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.886482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.996 [2024-07-13 07:08:57.886492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.886497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.886501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.996 [2024-07-13 07:08:57.886509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.996 [2024-07-13 07:08:57.886528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.996 [2024-07-13 07:08:57.890566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.996 [2024-07-13 07:08:57.890583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.996 [2024-07-13 07:08:57.890588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.890593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.996 [2024-07-13 07:08:57.890606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.890612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.890616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e526e0) 00:22:49.996 [2024-07-13 07:08:57.890624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.996 [2024-07-13 07:08:57.890649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e9e340, cid 3, qid 0 00:22:49.996 [2024-07-13 07:08:57.890738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.996 [2024-07-13 07:08:57.890745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.996 
[2024-07-13 07:08:57.890749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.996 [2024-07-13 07:08:57.890753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e9e340) on tqpair=0x1e526e0 00:22:49.996 [2024-07-13 07:08:57.890761] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:22:49.996 0% 00:22:49.996 Data Units Read: 0 00:22:49.996 Data Units Written: 0 00:22:49.996 Host Read Commands: 0 00:22:49.996 Host Write Commands: 0 00:22:49.996 Controller Busy Time: 0 minutes 00:22:49.996 Power Cycles: 0 00:22:49.996 Power On Hours: 0 hours 00:22:49.996 Unsafe Shutdowns: 0 00:22:49.996 Unrecoverable Media Errors: 0 00:22:49.996 Lifetime Error Log Entries: 0 00:22:49.996 Warning Temperature Time: 0 minutes 00:22:49.996 Critical Temperature Time: 0 minutes 00:22:49.996 00:22:49.997 Number of Queues 00:22:49.997 ================ 00:22:49.997 Number of I/O Submission Queues: 127 00:22:49.997 Number of I/O Completion Queues: 127 00:22:49.997 00:22:49.997 Active Namespaces 00:22:49.997 ================= 00:22:49.997 Namespace ID:1 00:22:49.997 Error Recovery Timeout: Unlimited 00:22:49.997 Command Set Identifier: NVM (00h) 00:22:49.997 Deallocate: Supported 00:22:49.997 Deallocated/Unwritten Error: Not Supported 00:22:49.997 Deallocated Read Value: Unknown 00:22:49.997 Deallocate in Write Zeroes: Not Supported 00:22:49.997 Deallocated Guard Field: 0xFFFF 00:22:49.997 Flush: Supported 00:22:49.997 Reservation: Supported 00:22:49.997 Namespace Sharing Capabilities: Multiple Controllers 00:22:49.997 Size (in LBAs): 131072 (0GiB) 00:22:49.997 Capacity (in LBAs): 131072 (0GiB) 00:22:49.997 Utilization (in LBAs): 131072 (0GiB) 00:22:49.997 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:49.997 EUI64: ABCDEF0123456789 00:22:49.997 UUID: 09bea1c7-2b1f-44b0-8d21-cd97cd22ed26 00:22:49.997 Thin Provisioning: Not Supported 00:22:49.997 Per-NS Atomic Units: Yes 00:22:49.997 Atomic Boundary Size (Normal): 0 00:22:49.997 Atomic Boundary Size (PFail): 0 00:22:49.997 Atomic Boundary Offset: 0 00:22:49.997 Maximum Single Source Range Length: 65535 00:22:49.997 Maximum Copy Length: 65535 00:22:49.997 Maximum Source Range Count: 1 00:22:49.997 NGUID/EUI64 Never Reused: No 00:22:49.997 Namespace Write Protected: No 00:22:49.997 Number of LBA Formats: 1 00:22:49.997 Current LBA Format: LBA Format #00 00:22:49.997 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:49.997 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.997 07:08:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:49.997 rmmod nvme_tcp 00:22:49.997 rmmod nvme_fabrics 00:22:49.997 rmmod nvme_keyring 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 104824 ']' 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 104824 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 104824 ']' 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 104824 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 104824 00:22:49.997 killing process with pid 104824 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 104824' 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 104824 00:22:49.997 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 104824 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:50.563 00:22:50.563 real 0m2.744s 00:22:50.563 user 0m7.745s 00:22:50.563 sys 0m0.687s 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:50.563 07:08:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.563 ************************************ 00:22:50.563 END TEST nvmf_identify 00:22:50.563 ************************************ 00:22:50.563 07:08:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:50.563 07:08:58 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:50.563 07:08:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:50.563 07:08:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.563 07:08:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.563 
************************************ 00:22:50.563 START TEST nvmf_perf 00:22:50.563 ************************************ 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:50.563 * Looking for test storage... 00:22:50.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.563 07:08:58 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.564 07:08:58 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:50.564 Cannot find device "nvmf_tgt_br" 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:22:50.564 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:50.822 Cannot find device "nvmf_tgt_br2" 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:50.822 Cannot find device "nvmf_tgt_br" 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:50.822 Cannot find device "nvmf_tgt_br2" 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:50.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:50.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:50.822 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:51.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:22:51.080 00:22:51.080 --- 10.0.0.2 ping statistics --- 00:22:51.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.080 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:51.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:51.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:22:51.080 00:22:51.080 --- 10.0.0.3 ping statistics --- 00:22:51.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.080 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:51.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:51.080 00:22:51.080 --- 10.0.0.1 ping statistics --- 00:22:51.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.080 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=105048 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 105048 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 105048 ']' 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.080 07:08:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.080 [2024-07-13 07:08:58.996594] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:22:51.080 [2024-07-13 07:08:58.996978] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.080 [2024-07-13 07:08:59.135282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.339 [2024-07-13 07:08:59.241033] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.339 [2024-07-13 07:08:59.241425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.339 [2024-07-13 07:08:59.241624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.339 [2024-07-13 07:08:59.241752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.339 [2024-07-13 07:08:59.241792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.339 [2024-07-13 07:08:59.241991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.339 [2024-07-13 07:08:59.242146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.339 [2024-07-13 07:08:59.242259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.339 [2024-07-13 07:08:59.242309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.276 07:09:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.276 07:09:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:52.276 07:09:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.276 07:09:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.276 07:09:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:52.276 07:09:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.276 07:09:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:52.276 07:09:00 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:52.535 07:09:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:52.535 07:09:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:22:52.793 07:09:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:22:52.793 07:09:00 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:53.052 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:53.052 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:22:53.052 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:53.052 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:53.052 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:53.309 [2024-07-13 07:09:01.223882] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.309 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.567 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:53.567 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.825 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:53.825 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:54.086 07:09:01 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.347 [2024-07-13 07:09:02.201849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.347 07:09:02 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:54.607 07:09:02 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:54.607 07:09:02 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:54.607 07:09:02 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:54.607 07:09:02 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:55.542 Initializing NVMe Controllers 00:22:55.542 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:55.542 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:55.542 Initialization complete. Launching workers. 00:22:55.542 ======================================================== 00:22:55.542 Latency(us) 00:22:55.542 Device Information : IOPS MiB/s Average min max 00:22:55.542 PCIE (0000:00:10.0) NSID 1 from core 0: 23281.10 90.94 1374.70 392.24 7873.15 00:22:55.542 ======================================================== 00:22:55.542 Total : 23281.10 90.94 1374.70 392.24 7873.15 00:22:55.542 00:22:55.542 07:09:03 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:56.922 Initializing NVMe Controllers 00:22:56.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:56.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:56.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:56.922 Initialization complete. Launching workers. 
00:22:56.922 ======================================================== 00:22:56.922 Latency(us) 00:22:56.922 Device Information : IOPS MiB/s Average min max 00:22:56.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3084.88 12.05 323.88 117.79 7176.92 00:22:56.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.88 0.48 8202.69 6980.35 12038.20 00:22:56.922 ======================================================== 00:22:56.922 Total : 3207.76 12.53 625.68 117.79 12038.20 00:22:56.922 00:22:56.922 07:09:04 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:58.299 Initializing NVMe Controllers 00:22:58.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:58.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:58.300 Initialization complete. Launching workers. 00:22:58.300 ======================================================== 00:22:58.300 Latency(us) 00:22:58.300 Device Information : IOPS MiB/s Average min max 00:22:58.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9031.37 35.28 3543.58 751.18 7491.29 00:22:58.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2686.21 10.49 12029.89 7241.18 20229.27 00:22:58.300 ======================================================== 00:22:58.300 Total : 11717.58 45.77 5489.03 751.18 20229.27 00:22:58.300 00:22:58.300 07:09:06 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:22:58.300 07:09:06 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:00.835 Initializing NVMe Controllers 00:23:00.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.835 Controller IO queue size 128, less than required. 00:23:00.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.835 Controller IO queue size 128, less than required. 00:23:00.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:00.835 Initialization complete. Launching workers. 
00:23:00.835 ======================================================== 00:23:00.835 Latency(us) 00:23:00.835 Device Information : IOPS MiB/s Average min max 00:23:00.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1508.20 377.05 86482.06 53448.76 158764.27 00:23:00.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.47 144.37 241442.82 130740.18 411640.31 00:23:00.835 ======================================================== 00:23:00.835 Total : 2085.67 521.42 129386.87 53448.76 411640.31 00:23:00.835 00:23:00.835 07:09:08 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:01.094 Initializing NVMe Controllers 00:23:01.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.094 Controller IO queue size 128, less than required. 00:23:01.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:01.094 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:01.094 Controller IO queue size 128, less than required. 00:23:01.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:01.094 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:23:01.094 WARNING: Some requested NVMe devices were skipped 00:23:01.094 No valid NVMe controllers or AIO or URING devices found 00:23:01.094 07:09:09 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:03.626 Initializing NVMe Controllers 00:23:03.626 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.626 Controller IO queue size 128, less than required. 00:23:03.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:03.626 Controller IO queue size 128, less than required. 00:23:03.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:03.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:03.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:03.626 Initialization complete. Launching workers. 
00:23:03.626 00:23:03.626 ==================== 00:23:03.626 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:03.626 TCP transport: 00:23:03.626 polls: 7683 00:23:03.626 idle_polls: 4928 00:23:03.626 sock_completions: 2755 00:23:03.626 nvme_completions: 5461 00:23:03.626 submitted_requests: 8184 00:23:03.626 queued_requests: 1 00:23:03.626 00:23:03.626 ==================== 00:23:03.626 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:03.626 TCP transport: 00:23:03.626 polls: 10537 00:23:03.626 idle_polls: 7761 00:23:03.626 sock_completions: 2776 00:23:03.626 nvme_completions: 5663 00:23:03.626 submitted_requests: 8500 00:23:03.626 queued_requests: 1 00:23:03.626 ======================================================== 00:23:03.626 Latency(us) 00:23:03.626 Device Information : IOPS MiB/s Average min max 00:23:03.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1363.53 340.88 96757.10 72617.22 161794.28 00:23:03.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1413.98 353.49 91039.49 47855.87 122631.70 00:23:03.626 ======================================================== 00:23:03.626 Total : 2777.51 694.38 93846.37 47855.87 161794.28 00:23:03.626 00:23:03.626 07:09:11 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:03.883 07:09:11 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.141 07:09:11 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:04.141 07:09:11 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:23:04.141 07:09:11 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:04.398 07:09:12 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=1aa936b6-48f5-4587-86b9-e35116724e2e 00:23:04.398 07:09:12 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 1aa936b6-48f5-4587-86b9-e35116724e2e 00:23:04.399 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=1aa936b6-48f5-4587-86b9-e35116724e2e 00:23:04.399 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:04.399 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:23:04.399 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:23:04.399 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:04.656 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:04.656 { 00:23:04.656 "base_bdev": "Nvme0n1", 00:23:04.656 "block_size": 4096, 00:23:04.656 "cluster_size": 4194304, 00:23:04.656 "free_clusters": 1278, 00:23:04.656 "name": "lvs_0", 00:23:04.656 "total_data_clusters": 1278, 00:23:04.656 "uuid": "1aa936b6-48f5-4587-86b9-e35116724e2e" 00:23:04.656 } 00:23:04.656 ]' 00:23:04.656 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1aa936b6-48f5-4587-86b9-e35116724e2e") .free_clusters' 00:23:04.656 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:23:04.656 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1aa936b6-48f5-4587-86b9-e35116724e2e") .cluster_size' 00:23:04.656 5112 00:23:04.656 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # 
cs=4194304 00:23:04.656 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:23:04.656 07:09:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:23:04.656 07:09:12 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:23:04.656 07:09:12 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1aa936b6-48f5-4587-86b9-e35116724e2e lbd_0 5112 00:23:04.914 07:09:12 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=fe121f07-e75a-439e-a6e3-a937f9791de6 00:23:04.914 07:09:12 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore fe121f07-e75a-439e-a6e3-a937f9791de6 lvs_n_0 00:23:05.171 07:09:13 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=b9e4f13b-1e2c-4466-a113-3260fc633ec7 00:23:05.171 07:09:13 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb b9e4f13b-1e2c-4466-a113-3260fc633ec7 00:23:05.171 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=b9e4f13b-1e2c-4466-a113-3260fc633ec7 00:23:05.171 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:05.171 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:23:05.171 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:23:05.171 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:05.429 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:05.429 { 00:23:05.429 "base_bdev": "Nvme0n1", 00:23:05.429 "block_size": 4096, 00:23:05.429 "cluster_size": 4194304, 00:23:05.429 "free_clusters": 0, 00:23:05.429 "name": "lvs_0", 00:23:05.429 "total_data_clusters": 1278, 00:23:05.429 "uuid": "1aa936b6-48f5-4587-86b9-e35116724e2e" 00:23:05.429 }, 00:23:05.429 { 00:23:05.429 "base_bdev": "fe121f07-e75a-439e-a6e3-a937f9791de6", 00:23:05.429 "block_size": 4096, 00:23:05.429 "cluster_size": 4194304, 00:23:05.429 "free_clusters": 1276, 00:23:05.429 "name": "lvs_n_0", 00:23:05.429 "total_data_clusters": 1276, 00:23:05.429 "uuid": "b9e4f13b-1e2c-4466-a113-3260fc633ec7" 00:23:05.429 } 00:23:05.429 ]' 00:23:05.429 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="b9e4f13b-1e2c-4466-a113-3260fc633ec7") .free_clusters' 00:23:05.429 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:23:05.429 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="b9e4f13b-1e2c-4466-a113-3260fc633ec7") .cluster_size' 00:23:05.688 5104 00:23:05.688 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:23:05.688 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:23:05.688 07:09:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:23:05.688 07:09:13 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:23:05.688 07:09:13 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b9e4f13b-1e2c-4466-a113-3260fc633ec7 lbd_nest_0 5104 00:23:05.688 07:09:13 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=1fa4b341-1b17-4d67-8411-c86862d9c074 00:23:05.688 07:09:13 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
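The lvol calls around this point build a two-level stack: lvs_0 sits directly on the NVMe drive, lbd_0 takes essentially all of it (1278 free clusters × 4 MiB = 5112 MiB), a second store lvs_n_0 is nested on lbd_0, and lbd_nest_0 (1276 × 4 MiB = 5104 MiB) is the volume that gets re-exported over NVMe/TCP for the queue-depth/I/O-size sweep that follows. A condensed sketch using the names, sizes, and NQN from the log (the command substitutions mirror how ls_guid and the other script variables are captured):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  # lvstore on the NVMe bdev, then one lvol filling it.
  ls_guid=$($rpc bdev_lvol_create_lvstore Nvme0n1 lvs_0)
  lb_guid=$($rpc bdev_lvol_create -u "$ls_guid" lbd_0 5112)

  # Nested lvstore on that lvol, and the test volume carved out of it.
  ls_nested_guid=$($rpc bdev_lvol_create_lvstore "$lb_guid" lvs_n_0)
  lb_nested_guid=$($rpc bdev_lvol_create -u "$ls_nested_guid" lbd_nest_0 5104)

  # Export the nested lvol over NVMe/TCP and sweep queue depth x I/O size.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$lb_nested_guid"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for qd in 1 32 128; do
    for io in 512 131072; do
      $perf -q "$qd" -o "$io" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done

The 512-byte passes of the sweep produce no numbers: the nested lvol reports a 4096-byte block size, so spdk_nvme_perf drops that namespace with the "invalid ns size ... for I/O size 512" warnings seen below.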
00:23:05.947 07:09:13 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:05.947 07:09:13 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 1fa4b341-1b17-4d67-8411-c86862d9c074 00:23:06.205 07:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.463 07:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:06.463 07:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:06.463 07:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:06.463 07:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:06.463 07:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:06.721 Initializing NVMe Controllers 00:23:06.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.721 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:06.721 WARNING: Some requested NVMe devices were skipped 00:23:06.721 No valid NVMe controllers or AIO or URING devices found 00:23:06.721 07:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:06.721 07:09:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:18.933 Initializing NVMe Controllers 00:23:18.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:18.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:18.933 Initialization complete. Launching workers. 
00:23:18.933 ======================================================== 00:23:18.933 Latency(us) 00:23:18.933 Device Information : IOPS MiB/s Average min max 00:23:18.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 824.03 103.00 1213.18 393.49 8701.13 00:23:18.933 ======================================================== 00:23:18.933 Total : 824.03 103.00 1213.18 393.49 8701.13 00:23:18.933 00:23:18.933 07:09:25 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:18.933 07:09:25 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:18.933 07:09:25 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:18.933 Initializing NVMe Controllers 00:23:18.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:18.933 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:18.933 WARNING: Some requested NVMe devices were skipped 00:23:18.933 No valid NVMe controllers or AIO or URING devices found 00:23:18.933 07:09:25 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:18.933 07:09:25 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:28.905 Initializing NVMe Controllers 00:23:28.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:28.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:28.905 Initialization complete. Launching workers. 
00:23:28.905 ======================================================== 00:23:28.905 Latency(us) 00:23:28.905 Device Information : IOPS MiB/s Average min max 00:23:28.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 931.80 116.47 34393.45 7448.10 275240.45 00:23:28.905 ======================================================== 00:23:28.905 Total : 931.80 116.47 34393.45 7448.10 275240.45 00:23:28.905 00:23:28.905 07:09:35 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:28.905 07:09:35 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:28.905 07:09:35 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:28.905 Initializing NVMe Controllers 00:23:28.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:28.905 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:28.906 WARNING: Some requested NVMe devices were skipped 00:23:28.906 No valid NVMe controllers or AIO or URING devices found 00:23:28.906 07:09:35 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:28.906 07:09:35 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:38.940 Initializing NVMe Controllers 00:23:38.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.940 Controller IO queue size 128, less than required. 00:23:38.940 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:38.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:38.940 Initialization complete. Launching workers. 
00:23:38.940 ======================================================== 00:23:38.940 Latency(us) 00:23:38.940 Device Information : IOPS MiB/s Average min max 00:23:38.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3752.39 469.05 34142.63 4582.64 72229.64 00:23:38.940 ======================================================== 00:23:38.940 Total : 3752.39 469.05 34142.63 4582.64 72229.64 00:23:38.940 00:23:38.940 07:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.940 07:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1fa4b341-1b17-4d67-8411-c86862d9c074 00:23:38.940 07:09:46 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:39.197 07:09:47 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fe121f07-e75a-439e-a6e3-a937f9791de6 00:23:39.456 07:09:47 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:39.714 07:09:47 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:39.714 07:09:47 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:39.714 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:39.714 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:39.714 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:39.714 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:39.714 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:39.714 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:39.714 rmmod nvme_tcp 00:23:39.714 rmmod nvme_fabrics 00:23:39.714 rmmod nvme_keyring 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 105048 ']' 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 105048 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 105048 ']' 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 105048 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105048 00:23:39.972 killing process with pid 105048 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105048' 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 105048 00:23:39.972 07:09:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 105048 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:41.345 00:23:41.345 real 0m50.756s 00:23:41.345 user 3m9.265s 00:23:41.345 sys 0m11.349s 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:41.345 07:09:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:41.345 ************************************ 00:23:41.345 END TEST nvmf_perf 00:23:41.345 ************************************ 00:23:41.345 07:09:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:41.345 07:09:49 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:41.345 07:09:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:41.345 07:09:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.345 07:09:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.345 ************************************ 00:23:41.346 START TEST nvmf_fio_host 00:23:41.346 ************************************ 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:41.346 * Looking for test storage... 
00:23:41.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:41.346 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.604 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
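The NVME_CONNECT, NVME_HOSTNQN, and NVME_HOSTID values re-exported here are the host-side identity the suite uses when it drives the kernel initiator. This excerpt never reaches an actual connect, so the following is only an illustration of how those pieces normally combine with the 10.0.0.2:4420 listener; the exact invocation is an assumption, not something shown in this log:

  # Hypothetical use of the variables defined above (not executed in this excerpt).
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NVME_SUBNQN" \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"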
00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:41.605 Cannot find device "nvmf_tgt_br" 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:41.605 Cannot find device "nvmf_tgt_br2" 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:41.605 Cannot find device "nvmf_tgt_br" 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:41.605 Cannot find device "nvmf_tgt_br2" 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:41.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:41.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:41.605 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:41.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:23:41.864 00:23:41.864 --- 10.0.0.2 ping statistics --- 00:23:41.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.864 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:41.864 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:41.864 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:23:41.864 00:23:41.864 --- 10.0.0.3 ping statistics --- 00:23:41.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.864 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:41.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:23:41.864 00:23:41.864 --- 10.0.0.1 ping statistics --- 00:23:41.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.864 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=106006 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 106006 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 106006 ']' 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.864 07:09:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.864 [2024-07-13 07:09:49.886880] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:41.864 [2024-07-13 07:09:49.886993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.123 [2024-07-13 07:09:50.026009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.123 [2024-07-13 07:09:50.137732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:42.123 [2024-07-13 07:09:50.138125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.123 [2024-07-13 07:09:50.138346] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.123 [2024-07-13 07:09:50.138415] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.123 [2024-07-13 07:09:50.138542] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.123 [2024-07-13 07:09:50.138748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.123 [2024-07-13 07:09:50.138919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.123 [2024-07-13 07:09:50.139671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.123 [2024-07-13 07:09:50.139680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.058 07:09:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.058 07:09:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:43.058 07:09:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:43.058 [2024-07-13 07:09:51.068557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.058 07:09:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:43.058 07:09:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.058 07:09:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.316 07:09:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:43.316 Malloc1 00:23:43.575 07:09:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.834 07:09:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:44.092 07:09:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.092 [2024-07-13 07:09:52.118360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.092 07:09:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:44.351 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:44.352 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:44.352 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.352 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:44.352 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:44.352 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:44.352 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:44.352 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:44.352 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:44.352 07:09:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:44.611 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:44.611 fio-3.35 00:23:44.611 Starting 1 thread 00:23:47.143 00:23:47.143 test: (groupid=0, jobs=1): err= 0: pid=106132: Sat Jul 13 07:09:54 2024 00:23:47.143 read: IOPS=9667, BW=37.8MiB/s (39.6MB/s)(75.8MiB/2006msec) 00:23:47.143 slat (nsec): min=1795, max=377891, avg=2348.84, stdev=3587.18 00:23:47.143 clat (usec): min=3279, max=11228, avg=6912.39, stdev=488.23 00:23:47.143 lat (usec): min=3294, max=11230, avg=6914.74, stdev=487.96 00:23:47.143 clat percentiles (usec): 00:23:47.143 | 1.00th=[ 5932], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6521], 00:23:47.143 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6980], 00:23:47.143 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7504], 95.00th=[ 7701], 00:23:47.143 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[10159], 99.95th=[10683], 00:23:47.143 | 99.99th=[11076] 00:23:47.143 bw ( KiB/s): min=37696, max=39264, per=99.96%, avg=38656.00, stdev=727.99, samples=4 00:23:47.143 iops : min= 9424, max= 9816, avg=9664.00, stdev=182.00, samples=4 00:23:47.143 write: IOPS=9674, BW=37.8MiB/s (39.6MB/s)(75.8MiB/2006msec); 0 zone resets 00:23:47.143 slat (nsec): min=1865, max=288253, avg=2421.23, stdev=2586.56 00:23:47.143 clat (usec): min=2555, max=11125, avg=6273.01, stdev=432.72 
00:23:47.143 lat (usec): min=2569, max=11127, avg=6275.43, stdev=432.62 00:23:47.143 clat percentiles (usec): 00:23:47.143 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5932], 00:23:47.143 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6390], 00:23:47.143 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6718], 95.00th=[ 6915], 00:23:47.143 | 99.00th=[ 7242], 99.50th=[ 7504], 99.90th=[10028], 99.95th=[10421], 00:23:47.143 | 99.99th=[11076] 00:23:47.143 bw ( KiB/s): min=38264, max=39104, per=99.98%, avg=38690.00, stdev=343.56, samples=4 00:23:47.143 iops : min= 9566, max= 9776, avg=9672.50, stdev=85.89, samples=4 00:23:47.143 lat (msec) : 4=0.16%, 10=99.74%, 20=0.10% 00:23:47.143 cpu : usr=63.44%, sys=26.58%, ctx=14, majf=0, minf=7 00:23:47.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:47.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.144 issued rwts: total=19393,19407,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.144 00:23:47.144 Run status group 0 (all jobs): 00:23:47.144 READ: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=75.8MiB (79.4MB), run=2006-2006msec 00:23:47.144 WRITE: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=75.8MiB (79.5MB), run=2006-2006msec 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
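[editor's note] Both fio passes in this test (example_config.fio above, mock_sgl_config.fio below) are driven the same way: the script LD_PRELOADs the SPDK nvme fio plugin and hands fio a --filename that encodes the NVMe/TCP target instead of a block device, while the ldd/grep steps around this point only check whether an ASAN runtime has to be preloaded first (here both lookups come back empty). A condensed, illustrative form of the invocation, using the paths shown in the trace:

  # preload the SPDK fio plugin and address the target by transport parameters
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096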
00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:47.144 07:09:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:47.144 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:47.144 fio-3.35 00:23:47.144 Starting 1 thread 00:23:49.676 00:23:49.676 test: (groupid=0, jobs=1): err= 0: pid=106181: Sat Jul 13 07:09:57 2024 00:23:49.676 read: IOPS=8065, BW=126MiB/s (132MB/s)(253MiB/2006msec) 00:23:49.676 slat (usec): min=2, max=115, avg= 3.76, stdev= 2.33 00:23:49.676 clat (usec): min=2785, max=17663, avg=9468.92, stdev=2244.34 00:23:49.676 lat (usec): min=2788, max=17666, avg=9472.68, stdev=2244.51 00:23:49.676 clat percentiles (usec): 00:23:49.676 | 1.00th=[ 5080], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7439], 00:23:49.676 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 00:23:49.676 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12125], 95.00th=[13173], 00:23:49.676 | 99.00th=[15795], 99.50th=[16450], 99.90th=[16909], 99.95th=[17171], 00:23:49.676 | 99.99th=[17433] 00:23:49.676 bw ( KiB/s): min=59392, max=68512, per=51.06%, avg=65896.00, stdev=4347.56, samples=4 00:23:49.676 iops : min= 3712, max= 4282, avg=4118.50, stdev=271.72, samples=4 00:23:49.676 write: IOPS=4880, BW=76.3MiB/s (80.0MB/s)(135MiB/1773msec); 0 zone resets 00:23:49.676 slat (usec): min=31, max=402, avg=37.49, stdev=10.10 00:23:49.676 clat (usec): min=3049, max=21082, avg=11292.23, stdev=1926.98 00:23:49.676 lat (usec): min=3083, max=21129, avg=11329.72, stdev=1928.60 00:23:49.676 clat percentiles (usec): 00:23:49.676 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9634], 00:23:49.676 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:23:49.676 | 70.00th=[11994], 80.00th=[12780], 90.00th=[13829], 95.00th=[15008], 00:23:49.676 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17957], 99.95th=[18482], 00:23:49.676 | 99.99th=[21103] 00:23:49.676 bw ( KiB/s): min=61856, max=71712, per=88.06%, avg=68768.00, stdev=4630.83, samples=4 00:23:49.676 iops : min= 3866, max= 4482, avg=4298.00, stdev=289.43, samples=4 00:23:49.676 lat (msec) : 4=0.19%, 10=48.15%, 20=51.66%, 50=0.01% 00:23:49.676 cpu : usr=66.08%, sys=21.40%, ctx=4, majf=0, minf=4 00:23:49.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:23:49.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:49.676 issued rwts: total=16179,8654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:49.676 00:23:49.676 Run status group 0 (all jobs): 00:23:49.676 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (265MB), run=2006-2006msec 00:23:49.676 WRITE: bw=76.3MiB/s (80.0MB/s), 
76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=135MiB (142MB), run=1773-1773msec 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:23:49.676 07:09:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:49.677 07:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:23:49.934 Nvme0n1 00:23:49.934 07:09:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:23:50.192 07:09:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=e182f2a4-40b0-4d45-9514-172e3a549d57 00:23:50.192 07:09:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb e182f2a4-40b0-4d45-9514-172e3a549d57 00:23:50.192 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=e182f2a4-40b0-4d45-9514-172e3a549d57 00:23:50.192 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:50.192 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:23:50.192 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:23:50.192 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:50.759 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:50.759 { 00:23:50.759 "base_bdev": "Nvme0n1", 00:23:50.759 "block_size": 4096, 00:23:50.759 "cluster_size": 1073741824, 00:23:50.759 "free_clusters": 4, 00:23:50.759 "name": "lvs_0", 00:23:50.759 "total_data_clusters": 4, 00:23:50.759 "uuid": "e182f2a4-40b0-4d45-9514-172e3a549d57" 00:23:50.759 } 00:23:50.759 ]' 00:23:50.759 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e182f2a4-40b0-4d45-9514-172e3a549d57") .free_clusters' 00:23:50.759 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:23:50.759 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e182f2a4-40b0-4d45-9514-172e3a549d57") .cluster_size' 00:23:50.759 4096 00:23:50.759 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:23:50.759 07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:23:50.759 
07:09:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:23:50.759 07:09:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:23:51.018 bbc97cb4-8fcd-4ac4-ab53-a12f978b646f 00:23:51.018 07:09:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:23:51.018 07:09:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:23:51.276 07:09:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:51.842 07:09:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:51.842 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:51.843 07:09:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:51.843 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:51.843 fio-3.35 00:23:51.843 Starting 1 thread 00:23:54.374 00:23:54.374 test: (groupid=0, jobs=1): err= 0: pid=106332: Sat Jul 13 07:10:02 2024 00:23:54.374 read: IOPS=6631, BW=25.9MiB/s (27.2MB/s)(52.0MiB/2008msec) 00:23:54.374 slat (nsec): min=1881, max=427058, avg=2745.16, stdev=4696.76 00:23:54.374 clat (usec): min=4612, max=16987, avg=10206.15, stdev=978.90 00:23:54.374 lat (usec): min=4621, max=16989, avg=10208.90, stdev=978.72 00:23:54.374 clat percentiles (usec): 00:23:54.374 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:23:54.374 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:23:54.374 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[11863], 00:23:54.374 | 99.00th=[12649], 99.50th=[13173], 99.90th=[15664], 99.95th=[16319], 00:23:54.374 | 99.99th=[16909] 00:23:54.374 bw ( KiB/s): min=25448, max=27104, per=99.88%, avg=26496.00, stdev=780.97, samples=4 00:23:54.374 iops : min= 6362, max= 6776, avg=6624.00, stdev=195.24, samples=4 00:23:54.374 write: IOPS=6635, BW=25.9MiB/s (27.2MB/s)(52.1MiB/2008msec); 0 zone resets 00:23:54.374 slat (usec): min=2, max=287, avg= 2.84, stdev= 3.30 00:23:54.374 clat (usec): min=2555, max=16523, avg=9018.58, stdev=830.41 00:23:54.374 lat (usec): min=2567, max=16525, avg=9021.42, stdev=830.32 00:23:54.374 clat percentiles (usec): 00:23:54.374 | 1.00th=[ 7177], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:23:54.374 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:23:54.374 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10290], 00:23:54.374 | 99.00th=[10814], 99.50th=[11207], 99.90th=[14877], 99.95th=[15795], 00:23:54.374 | 99.99th=[16450] 00:23:54.374 bw ( KiB/s): min=26304, max=26816, per=99.96%, avg=26534.00, stdev=211.40, samples=4 00:23:54.374 iops : min= 6576, max= 6704, avg=6633.50, stdev=52.85, samples=4 00:23:54.374 lat (msec) : 4=0.04%, 10=67.04%, 20=32.93% 00:23:54.374 cpu : usr=70.00%, sys=22.87%, ctx=5, majf=0, minf=7 00:23:54.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:54.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:54.374 issued rwts: total=13317,13325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:54.374 00:23:54.374 Run status group 0 (all jobs): 00:23:54.374 READ: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=52.0MiB (54.5MB), run=2008-2008msec 00:23:54.374 WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=52.1MiB (54.6MB), run=2008-2008msec 00:23:54.374 07:10:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:54.374 07:10:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:23:54.633 07:10:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=135410cf-7a4f-48db-9b23-ddfa471bcb50 00:23:54.633 07:10:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 
135410cf-7a4f-48db-9b23-ddfa471bcb50 00:23:54.633 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=135410cf-7a4f-48db-9b23-ddfa471bcb50 00:23:54.633 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:54.633 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:23:54.633 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:23:54.633 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:54.891 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:54.891 { 00:23:54.891 "base_bdev": "Nvme0n1", 00:23:54.891 "block_size": 4096, 00:23:54.891 "cluster_size": 1073741824, 00:23:54.891 "free_clusters": 0, 00:23:54.891 "name": "lvs_0", 00:23:54.891 "total_data_clusters": 4, 00:23:54.891 "uuid": "e182f2a4-40b0-4d45-9514-172e3a549d57" 00:23:54.891 }, 00:23:54.891 { 00:23:54.891 "base_bdev": "bbc97cb4-8fcd-4ac4-ab53-a12f978b646f", 00:23:54.891 "block_size": 4096, 00:23:54.891 "cluster_size": 4194304, 00:23:54.891 "free_clusters": 1022, 00:23:54.891 "name": "lvs_n_0", 00:23:54.891 "total_data_clusters": 1022, 00:23:54.891 "uuid": "135410cf-7a4f-48db-9b23-ddfa471bcb50" 00:23:54.891 } 00:23:54.891 ]' 00:23:54.891 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="135410cf-7a4f-48db-9b23-ddfa471bcb50") .free_clusters' 00:23:54.891 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:23:54.891 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="135410cf-7a4f-48db-9b23-ddfa471bcb50") .cluster_size' 00:23:55.148 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:23:55.148 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:23:55.148 4088 00:23:55.148 07:10:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:23:55.148 07:10:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:23:55.148 c76ec12f-9b22-458d-8efd-9b651a51c91c 00:23:55.148 07:10:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:23:55.407 07:10:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:23:55.665 07:10:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:55.931 07:10:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:56.232 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:56.232 fio-3.35 00:23:56.232 Starting 1 thread 00:23:58.776 00:23:58.776 test: (groupid=0, jobs=1): err= 0: pid=106447: Sat Jul 13 07:10:06 2024 00:23:58.776 read: IOPS=5439, BW=21.2MiB/s (22.3MB/s)(43.6MiB/2050msec) 00:23:58.776 slat (nsec): min=1898, max=389870, avg=2888.27, stdev=4900.33 00:23:58.776 clat (usec): min=5374, max=61556, avg=12356.67, stdev=3306.26 00:23:58.776 lat (usec): min=5384, max=61558, avg=12359.56, stdev=3306.17 00:23:58.776 clat percentiles (usec): 00:23:58.776 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:23:58.776 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:23:58.776 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[13829], 00:23:58.776 | 99.00th=[14877], 99.50th=[50594], 99.90th=[58983], 99.95th=[60556], 00:23:58.776 | 99.99th=[61604] 00:23:58.776 bw ( KiB/s): min=20944, max=23016, per=100.00%, avg=22188.00, stdev=893.77, samples=4 00:23:58.776 iops : min= 5236, max= 5754, avg=5547.00, stdev=223.44, samples=4 00:23:58.776 write: IOPS=5413, BW=21.1MiB/s (22.2MB/s)(43.3MiB/2050msec); 0 zone resets 00:23:58.776 slat (usec): min=2, max=289, avg= 2.97, stdev= 3.70 00:23:58.776 clat (usec): min=2591, max=59985, avg=11158.84, stdev=3588.38 00:23:58.776 lat 
(usec): min=2603, max=59987, avg=11161.81, stdev=3588.30 00:23:58.776 clat percentiles (usec): 00:23:58.776 | 1.00th=[ 8717], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:23:58.776 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:23:58.776 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:23:58.776 | 99.00th=[13173], 99.50th=[51643], 99.90th=[57410], 99.95th=[58983], 00:23:58.776 | 99.99th=[58983] 00:23:58.776 bw ( KiB/s): min=21888, max=22400, per=100.00%, avg=22054.00, stdev=237.26, samples=4 00:23:58.776 iops : min= 5472, max= 5600, avg=5513.50, stdev=59.32, samples=4 00:23:58.776 lat (msec) : 4=0.02%, 10=7.95%, 20=91.46%, 50=0.01%, 100=0.57% 00:23:58.776 cpu : usr=71.60%, sys=21.86%, ctx=45, majf=0, minf=7 00:23:58.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:23:58.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:58.777 issued rwts: total=11151,11097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.777 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:58.777 00:23:58.777 Run status group 0 (all jobs): 00:23:58.777 READ: bw=21.2MiB/s (22.3MB/s), 21.2MiB/s-21.2MiB/s (22.3MB/s-22.3MB/s), io=43.6MiB (45.7MB), run=2050-2050msec 00:23:58.777 WRITE: bw=21.1MiB/s (22.2MB/s), 21.1MiB/s-21.1MiB/s (22.2MB/s-22.2MB/s), io=43.3MiB (45.5MB), run=2050-2050msec 00:23:58.777 07:10:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:58.777 07:10:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:23:58.777 07:10:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:23:59.035 07:10:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:59.294 07:10:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:23:59.552 07:10:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:59.552 07:10:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:00.119 07:10:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:00.119 07:10:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:00.119 07:10:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:00.119 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:00.119 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:00.119 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.119 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:00.119 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.119 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.119 rmmod nvme_tcp 00:24:00.119 rmmod nvme_fabrics 00:24:00.119 rmmod nvme_keyring 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:00.378 07:10:08 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 106006 ']' 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 106006 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 106006 ']' 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 106006 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106006 00:24:00.378 killing process with pid 106006 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106006' 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 106006 00:24:00.378 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 106006 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:00.638 00:24:00.638 real 0m19.187s 00:24:00.638 user 1m23.739s 00:24:00.638 sys 0m4.662s 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:00.638 07:10:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.638 ************************************ 00:24:00.638 END TEST nvmf_fio_host 00:24:00.638 ************************************ 00:24:00.638 07:10:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:00.638 07:10:08 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:00.638 07:10:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:00.638 07:10:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.638 07:10:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.638 ************************************ 00:24:00.638 START TEST nvmf_failover 00:24:00.638 ************************************ 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:00.638 * Looking for test storage... 
00:24:00.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.638 07:10:08 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.639 
07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:00.639 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:00.898 Cannot find device "nvmf_tgt_br" 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:00.898 Cannot find device "nvmf_tgt_br2" 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:00.898 Cannot find device "nvmf_tgt_br" 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:00.898 Cannot find device "nvmf_tgt_br2" 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:00.898 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:01.157 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:01.157 07:10:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:01.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
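[editor's note] Before any NVMe/TCP traffic is attempted, the entries just after this point open the firewall for port 4420 on the initiator-side veth, allow forwarding across the bridge, and ping every address in both directions. Condensed to the commands the trace shows:

  # allow the NVMe/TCP port in on the initiator interface and let the bridge forward
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity-check reachability: default ns -> target addresses, target ns -> initiator
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1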
00:24:01.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:24:01.157 00:24:01.157 --- 10.0.0.2 ping statistics --- 00:24:01.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.157 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:01.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:01.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:24:01.157 00:24:01.157 --- 10.0.0.3 ping statistics --- 00:24:01.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.157 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:01.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:24:01.157 00:24:01.157 --- 10.0.0.1 ping statistics --- 00:24:01.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.157 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=106718 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 106718 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 106718 ']' 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:01.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
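The topology that nvmf_veth_init built above, and into which the target is now being started, can be reproduced standalone. A sketch with the same interface names and 10.0.0.0/24 addressing (initiator side in the default namespace, both target interfaces inside nvmf_tgt_ns_spdk, all tied together by one bridge):

    # Namespace for the target and three veth pairs: one initiator-facing,
    # two target-facing (the target gets two addresses, 10.0.0.2 and 10.0.0.3).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-side ends move into the namespace; the *_br ends stay in the host.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge joins the host-side veth ends so 10.0.0.1/2/3 share a broadcast domain.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

    # Firewall rules used by the test: accept TCP/4420 arriving on the initiator
    # veth and allow bridge-internal forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Same sanity checks as the log: both target addresses from the host,
    # and the initiator address from inside the namespace.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1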
00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.157 07:10:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.157 [2024-07-13 07:10:09.115107] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:01.157 [2024-07-13 07:10:09.115222] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.416 [2024-07-13 07:10:09.256145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:01.416 [2024-07-13 07:10:09.338793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.416 [2024-07-13 07:10:09.338845] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.416 [2024-07-13 07:10:09.338871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.416 [2024-07-13 07:10:09.338879] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.416 [2024-07-13 07:10:09.338886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.416 [2024-07-13 07:10:09.339099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.416 [2024-07-13 07:10:09.339719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.416 [2024-07-13 07:10:09.339728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.350 07:10:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.350 07:10:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:02.350 07:10:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.350 07:10:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:02.350 07:10:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 07:10:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.350 07:10:10 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:02.606 [2024-07-13 07:10:10.447012] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.606 07:10:10 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:02.863 Malloc0 00:24:02.863 07:10:10 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.121 07:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:03.379 07:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.379 [2024-07-13 07:10:11.449776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.637 07:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:03.637 [2024-07-13 07:10:11.666095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:03.637 07:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:03.895 [2024-07-13 07:10:11.878370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=106830 00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 106830 /var/tmp/bdevperf.sock 00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 106830 ']' 00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
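At this point the target has been started inside the namespace and provisioned over JSON-RPC, and bdevperf has been launched on the initiator side in wait-for-RPC mode. A condensed sketch of that same sequence (paths as in this workspace; the socket-wait loops are a crude stand-in for the test's waitforlisten helper):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"
    NQN=nqn.2016-06.io.spdk:cnode1

    # Target runs inside the namespace so it owns 10.0.0.2 / 10.0.0.3.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the options the test passes
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
    for port in 4420 4421 4422; do                      # three listeners on 10.0.0.2 = three paths to fail over between
        $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
    done

    # Initiator side: bdevperf waits for RPCs (-z) on its own socket.
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done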
00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:03.895 07:10:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:04.828 07:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:04.828 07:10:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:24:04.828 07:10:12 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:05.394 NVMe0n1
00:24:05.394 07:10:13 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:05.394
00:24:05.394 07:10:13 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=106876
00:24:05.394 07:10:13 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:05.394 07:10:13 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:24:06.770 07:10:14 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:06.770 [2024-07-13 07:10:14.691203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2157690 is same with the state(5) to be set
[last message repeated for tqpair=0x2157690, identical except for timestamps, through 07:10:14.691848]
00:24:06.771 07:10:14 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:10.056 07:10:17 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:10.056
00:24:10.056 07:10:18 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:10.316 [2024-07-13 07:10:18.268211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21582d0 is same with the state(5) to be set
[last message repeated for tqpair=0x21582d0, identical except for timestamps, through 07:10:18.268744]
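The repeated tcp.c:1607 messages above are the target tearing down the TCP qpair behind the listener that was just removed; bdevperf keeps the verify workload running on the remaining path. To watch this from the initiator side, the attached controllers and bdev counters can be queried through bdevperf's RPC socket (illustrative; output fields vary between SPDK versions):

    # Paths (NVMe controllers) bdevperf currently has attached.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    # Per-bdev I/O counters, to confirm the workload kept running across the failover.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b NVMe0n1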
00:24:10.316 07:10:18 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:13.600 07:10:21 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:13.600 [2024-07-13 07:10:21.496475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:13.600 07:10:21 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:14.535 07:10:22 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:14.794 [2024-07-13 07:10:22.721894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faf230 is same with the state(5) to be set
[last message repeated for tqpair=0x1faf230, identical except for timestamps, through 07:10:22.722459]
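Pulled together, the failover exercise is a fixed choreography: the initiator attaches paths through bdevperf's RPC socket while the target adds and removes listeners under the running workload. A sketch of the same steps in one place (same NQN, address, and ports as above):

    SPDK=/home/vagrant/spdk_repo/spdk
    NQN=nqn.2016-06.io.spdk:cnode1
    tgt_rpc() { "$SPDK/scripts/rpc.py" "$@"; }                            # target RPC (default /var/tmp/spdk.sock)
    ini_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }  # initiator-side bdevperf RPC

    ini_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"   # primary path
    ini_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"   # standby path
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &        # 15 s verify run
    sleep 1
    tgt_rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420    # fail over 4420 -> 4421
    sleep 3
    ini_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"   # third path
    tgt_rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421    # fail over 4421 -> 4422
    sleep 3
    tgt_rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420       # bring the original port back
    sleep 1
    tgt_rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422    # fail back to 4420
    wait    # perform_tests returns once the verify run finishes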
00:24:14.817 07:10:22 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 106876
00:24:21.423 0
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 106830
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 106830 ']'
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 106830
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106830
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:21.423 killing process with pid 106830
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106830'
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 106830
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 106830
00:24:21.423 07:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:21.423 [2024-07-13 07:10:11.961773] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:24:21.423 [2024-07-13 07:10:11.961938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106830 ]
00:24:21.423 [2024-07-13 07:10:12.102818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:21.423 [2024-07-13 07:10:12.240207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:21.423 Running I/O for 15 seconds...
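The per-I/O trace that follows is still the bdevperf output dumped from try.txt: each READ that was in flight on a path being torn down completes with ABORTED - SQ DELETION and is handled by the initiator's failover logic, so the verify run can still finish. A quick way to gauge how much I/O was caught mid-failover:

    # Count aborted completions recorded in the captured bdevperf output.
    grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt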
00:24:21.423 [2024-07-13 07:10:14.692172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 
07:10:14.692526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.692978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.692991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693136] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.423 [2024-07-13 07:10:14.693247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.423 [2024-07-13 07:10:14.693259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.693827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.693853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.693879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.693905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.693930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.693956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 
07:10:14.693982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.693995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.694014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.694040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.694079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.694107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.694133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.694159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.694185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.694211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.424 [2024-07-13 07:10:14.694237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.694263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.694289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.694345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.694374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.694407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.694434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.694460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.424 [2024-07-13 07:10:14.694475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.424 [2024-07-13 07:10:14.694486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.694978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.694997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 
[2024-07-13 07:10:14.695148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695409] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.425 [2024-07-13 07:10:14.695670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.425 [2024-07-13 07:10:14.695682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.695696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:14.695709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.695723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:14.695735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.695748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:14.695761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.695774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:14.695793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.695808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:14.695821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.695835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:14.695847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.695861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef3390 is same with the state(5) to be set 00:24:21.426 [2024-07-13 07:10:14.695883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.426 [2024-07-13 07:10:14.695894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.426 [2024-07-13 07:10:14.695904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85064 len:8 PRP1 0x0 PRP2 0x0 00:24:21.426 [2024-07-13 07:10:14.695916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.695981] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ef3390 was disconnected and freed. reset controller. 
00:24:21.426 [2024-07-13 07:10:14.696013] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:21.426 [2024-07-13 07:10:14.696069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.426 [2024-07-13 07:10:14.696088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.696102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.426 [2024-07-13 07:10:14.696115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.696128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.426 [2024-07-13 07:10:14.696140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.696152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.426 [2024-07-13 07:10:14.696164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:14.696176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:21.426 [2024-07-13 07:10:14.696214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec3110 (9): Bad file descriptor 00:24:21.426 [2024-07-13 07:10:14.699590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.426 [2024-07-13 07:10:14.731527] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:21.426 [2024-07-13 07:10:18.267788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.426 [2024-07-13 07:10:18.267879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.267898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.426 [2024-07-13 07:10:18.267953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.267967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.426 [2024-07-13 07:10:18.267989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.268002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.426 [2024-07-13 07:10:18.268013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.268025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3110 is same with the state(5) to be set 00:24:21.426 [2024-07-13 07:10:18.269674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.269975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.269988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270102] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.426 [2024-07-13 07:10:18.270288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.426 [2024-07-13 07:10:18.270311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:21.427 [2024-07-13 07:10:18.270962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.270974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.270998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.427 [2024-07-13 07:10:18.271011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.271026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.427 [2024-07-13 07:10:18.271040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.271054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.427 [2024-07-13 07:10:18.271066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.271080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.427 [2024-07-13 07:10:18.271091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.271105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.427 [2024-07-13 07:10:18.271117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.271131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.427 [2024-07-13 07:10:18.271144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.271157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.427 [2024-07-13 07:10:18.271170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.271183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.427 [2024-07-13 07:10:18.271195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.271208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.427 [2024-07-13 07:10:18.271221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.427 [2024-07-13 07:10:18.271234] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.427 [2024-07-13 07:10:18.271246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.271969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.271991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.428 [2024-07-13 07:10:18.272406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.428 [2024-07-13 07:10:18.272420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 
07:10:18.272623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.429 [2024-07-13 07:10:18.272787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.272831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121048 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.272843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.272874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.272884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121056 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.272896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.272917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:24:21.429 [2024-07-13 07:10:18.272926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121064 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.272937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.272958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.272967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121072 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.272979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.272990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.272999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.273008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121080 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.273019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.273031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.273040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.273048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121088 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.273060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.273071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.273087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.273097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121096 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.273108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.273120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.273135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.273144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121104 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.273156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.273168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.273176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.273185] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121112 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.273196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.273213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.273222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.273231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121120 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.273243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.273255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.273263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.273272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121128 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.273284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.273296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.273304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.273313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121136 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.273325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.273336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.273345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.273354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120504 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.273365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.273377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.273386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.283733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120512 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.283764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.283794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:21.429 [2024-07-13 07:10:18.283805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:21.429 [2024-07-13 07:10:18.283815] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120520 len:8 PRP1 0x0 PRP2 0x0 00:24:21.429 [2024-07-13 07:10:18.283827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:18.283891] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ebd780 was disconnected and freed. reset controller. 00:24:21.429 [2024-07-13 07:10:18.283908] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:21.429 [2024-07-13 07:10:18.283922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:21.429 [2024-07-13 07:10:18.283972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec3110 (9): Bad file descriptor 00:24:21.429 [2024-07-13 07:10:18.287239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.429 [2024-07-13 07:10:18.317891] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:21.429 [2024-07-13 07:10:22.723393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.429 [2024-07-13 07:10:22.723457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.429 [2024-07-13 07:10:22.723482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.723973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.723986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 
[2024-07-13 07:10:22.724001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724545] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.430 [2024-07-13 07:10:22.724735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.430 [2024-07-13 07:10:22.724749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.431 [2024-07-13 07:10:22.724762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.724775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.431 [2024-07-13 07:10:22.724787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.724808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.431 [2024-07-13 07:10:22.724821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.724835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.431 [2024-07-13 07:10:22.724847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.724861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.431 [2024-07-13 07:10:22.724874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.724888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.431 [2024-07-13 07:10:22.724901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.724915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.431 [2024-07-13 07:10:22.724928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.724951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.431 [2024-07-13 07:10:22.724963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.724977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.431 [2024-07-13 07:10:22.724989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.725003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.431 [2024-07-13 07:10:22.725015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.725029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.431 [2024-07-13 07:10:22.725041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.725055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.431 [2024-07-13 07:10:22.725067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.725081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.431 [2024-07-13 07:10:22.725093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.725107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:21.431 [2024-07-13 07:10:22.725119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.431 [2024-07-13 07:10:22.725133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.431 [2024-07-13 07:10:22.725151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 
[2024-07-13 07:10:22.725390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.432 [2024-07-13 07:10:22.725501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.432 [2024-07-13 07:10:22.725528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.432 [2024-07-13 07:10:22.725577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.432 [2024-07-13 07:10:22.725606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.432 [2024-07-13 07:10:22.725633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.432 [2024-07-13 07:10:22.725660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.432 [2024-07-13 07:10:22.725675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:21.432 [2024-07-13 07:10:22.725688] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:21.432-00:24:21.434 [2024-07-13 07:10:22.725702 - 07:10:22.727297] nvme_qpair.c: (repeated NOTICE entries condensed) nvme_io_qpair_print_command printed every outstanding WRITE on qid:1 (nsid:1, lba:104912 through lba:105240, len:8, SGL DATA BLOCK) and spdk_nvme_print_completion reported each of them as ABORTED - SQ DELETION (00/08); nvme_qpair_abort_queued_reqs then aborted the queued I/O and nvme_qpair_manual_complete_request manually completed the remaining queued WRITE (lba:105248) and the queued READ commands (lba:104392 through lba:104440) with the same ABORTED - SQ DELETION status. 
00:24:21.434 [2024-07-13 07:10:22.727362] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ebd570 was disconnected and freed. reset controller. 
00:24:21.434 [2024-07-13 07:10:22.727379] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 
00:24:21.434 [2024-07-13 07:10:22.727434 - 07:10:22.727539] nvme_qpair.c: (repeated NOTICE entries condensed) the four admin-queue ASYNC EVENT REQUEST commands (qid:0, cid:0 through cid:3) were likewise completed as ABORTED - SQ DELETION (00/08). 
00:24:21.434 [2024-07-13 07:10:22.727565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:21.434 [2024-07-13 07:10:22.731017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.434 [2024-07-13 07:10:22.731056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec3110 (9): Bad file descriptor 00:24:21.434 [2024-07-13 07:10:22.769133] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:21.434 00:24:21.434 Latency(us) 00:24:21.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.434 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:21.434 Verification LBA range: start 0x0 length 0x4000 00:24:21.434 NVMe0n1 : 15.01 9687.90 37.84 238.04 0.00 12867.30 558.55 23592.96 00:24:21.434 =================================================================================================================== 00:24:21.434 Total : 9687.90 37.84 238.04 0.00 12867.30 558.55 23592.96 00:24:21.434 Received shutdown signal, test time was about 15.000000 seconds 00:24:21.434 00:24:21.434 Latency(us) 00:24:21.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.434 =================================================================================================================== 00:24:21.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=107080 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 107080 /var/tmp/bdevperf.sock 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 107080 ']' 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
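[Editor's note] The trace above shows host/failover.sh closing its first phase: it counts the 'Resetting controller successful' notices captured in try.txt and then relaunches bdevperf in RPC-server mode so later steps can add and remove paths on the fly. A minimal sketch of that step, reusing the paths and options visible in the trace ($rootdir is shorthand introduced here for /home/vagrant/spdk_repo/spdk; waitforlisten is the test-harness helper seen in the trace):

    # Count successful resets recorded in the first bdevperf run (try.txt path from the trace)
    count=$(grep -c 'Resetting controller successful' "$rootdir/test/nvmf/host/try.txt")
    (( count != 3 )) && exit 1   # the 15 s run is expected to have produced three successful resets

    # Relaunch bdevperf as an RPC server (-z) listening on its own socket
    "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

Running bdevperf with -z keeps it idle until perform_tests is invoked over the socket, which is exactly how the next phase drives it.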
00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.434 07:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.002 07:10:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.002 07:10:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:22.002 07:10:29 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:22.261 [2024-07-13 07:10:30.177242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.261 07:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:22.520 [2024-07-13 07:10:30.401332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:22.520 07:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:22.779 NVMe0n1 00:24:22.779 07:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.038 00:24:23.038 07:10:30 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.297 00:24:23.297 07:10:31 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.297 07:10:31 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:23.555 07:10:31 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.813 07:10:31 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:27.098 07:10:34 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:27.098 07:10:34 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:27.098 07:10:35 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.098 07:10:35 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=107218 00:24:27.098 07:10:35 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 107218 00:24:28.477 0 00:24:28.477 07:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:28.477 [2024-07-13 07:10:28.981544] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
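[Editor's note] The sequence traced above is the core of the second failover phase: two more listeners are added to the subsystem, the same namespace is attached through the bdevperf RPC socket once per path, and the active path is then detached so bdev_nvme has to fail over (the captured try.txt output continues below). A condensed sketch using the exact RPC calls from the trace ($rpc is shorthand introduced here for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    # Expose the subsystem on two additional ports
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Attach the NVMe0 controller through bdevperf once per path (4420 is the active path)
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0

    # Drop the active path; bdev_nvme should fail over (the trace shows 10.0.0.2:4420 -> 10.0.0.2:4421)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3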
00:24:28.477 [2024-07-13 07:10:28.981665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107080 ] 00:24:28.477 [2024-07-13 07:10:29.117007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.477 [2024-07-13 07:10:29.218957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.477 [2024-07-13 07:10:31.748775] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:28.477 [2024-07-13 07:10:31.748973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.477 [2024-07-13 07:10:31.749007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.477 [2024-07-13 07:10:31.749025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.477 [2024-07-13 07:10:31.749037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.477 [2024-07-13 07:10:31.749050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.477 [2024-07-13 07:10:31.749063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.477 [2024-07-13 07:10:31.749077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.477 [2024-07-13 07:10:31.749089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.477 [2024-07-13 07:10:31.749103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.477 [2024-07-13 07:10:31.749151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:28.477 [2024-07-13 07:10:31.749182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfc110 (9): Bad file descriptor 00:24:28.477 [2024-07-13 07:10:31.755940] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:28.477 Running I/O for 1 seconds... 
00:24:28.477 00:24:28.477 Latency(us) 00:24:28.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.477 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:28.477 Verification LBA range: start 0x0 length 0x4000 00:24:28.477 NVMe0n1 : 1.01 8929.70 34.88 0.00 0.00 14258.09 1936.29 16324.42 00:24:28.477 =================================================================================================================== 00:24:28.477 Total : 8929.70 34.88 0.00 0.00 14258.09 1936.29 16324.42 00:24:28.477 07:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:28.477 07:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.477 07:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:28.736 07:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.736 07:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:28.995 07:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:29.254 07:10:37 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 107080 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 107080 ']' 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 107080 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107080 00:24:32.540 killing process with pid 107080 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107080' 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 107080 00:24:32.540 07:10:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 107080 00:24:32.797 07:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:32.797 07:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:33.055 
07:10:41 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:33.055 rmmod nvme_tcp 00:24:33.055 rmmod nvme_fabrics 00:24:33.055 rmmod nvme_keyring 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 106718 ']' 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 106718 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 106718 ']' 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 106718 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106718 00:24:33.055 killing process with pid 106718 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106718' 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 106718 00:24:33.055 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 106718 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:33.621 00:24:33.621 real 0m32.888s 00:24:33.621 user 2m7.472s 00:24:33.621 sys 0m4.643s 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:33.621 07:10:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:33.621 ************************************ 00:24:33.621 END TEST nvmf_failover 00:24:33.621 ************************************ 00:24:33.621 07:10:41 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:33.621 07:10:41 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:33.621 07:10:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:33.621 07:10:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:33.621 07:10:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:33.621 ************************************ 00:24:33.621 START TEST nvmf_host_discovery 00:24:33.621 ************************************ 00:24:33.621 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:33.621 * Looking for test storage... 00:24:33.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:33.622 Cannot find device "nvmf_tgt_br" 00:24:33.622 
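[Editor's note] The 'Cannot find device' and 'Cannot open network namespace' messages here are expected: nvmf_veth_init first tears down any leftover topology and then rebuilds it, as the trace below shows. A condensed sketch of the topology it creates, using the interface names and addresses from the trace (run as root):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: one for the initiator, two for the target side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side ends together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic to the default port and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1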
07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:33.622 Cannot find device "nvmf_tgt_br2" 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:33.622 Cannot find device "nvmf_tgt_br" 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:33.622 Cannot find device "nvmf_tgt_br2" 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:24:33.622 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:33.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:33.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:33.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:33.880 00:24:33.880 --- 10.0.0.2 ping statistics --- 00:24:33.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.880 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:33.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:33.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:24:33.880 00:24:33.880 --- 10.0.0.3 ping statistics --- 00:24:33.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.880 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:33.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:24:33.880 00:24:33.880 --- 10.0.0.1 ping statistics --- 00:24:33.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.880 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:33.880 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.137 07:10:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:34.137 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.137 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:34.137 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.138 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=107513 00:24:34.138 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:34.138 07:10:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 107513 00:24:34.138 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 107513 ']' 00:24:34.138 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.138 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.138 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.138 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.138 07:10:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.138 [2024-07-13 07:10:42.035990] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:34.138 [2024-07-13 07:10:42.036102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.138 [2024-07-13 07:10:42.177681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.395 [2024-07-13 07:10:42.300096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
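[Editor's note] With the namespace reachable, the discovery test starts an nvmf_tgt inside the namespace as the target (traced above and below) and, further on, a second nvmf_tgt on the host side that acts as the discovery client. A condensed sketch of the target-side bring-up, using the commands visible in the trace ($spdk and $rpc are shorthand introduced here; waitforlisten is the harness helper from the trace):

    spdk=/home/vagrant/spdk_repo/spdk
    rpc="$spdk/scripts/rpc.py"

    # Target: nvmf_tgt runs inside the namespace, pinned to core 1 (-m 0x2)
    ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"

    # TCP transport plus a discovery listener on port 8009 (added just below in the trace)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

    # Two null bdevs that the test later exports through nqn.2016-06.io.spdk:cnode0
    $rpc bdev_null_create null0 1000 512
    $rpc bdev_null_create null1 1000 512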
00:24:34.395 [2024-07-13 07:10:42.300173] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.395 [2024-07-13 07:10:42.300184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.395 [2024-07-13 07:10:42.300192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.395 [2024-07-13 07:10:42.300198] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.395 [2024-07-13 07:10:42.300224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.966 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.966 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:34.966 07:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.966 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.967 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.231 [2024-07-13 07:10:43.068511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.231 [2024-07-13 07:10:43.080656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.231 null0 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.231 null1 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=107569 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 107569 /tmp/host.sock 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 107569 ']' 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.231 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.231 07:10:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:35.231 [2024-07-13 07:10:43.173642] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:35.231 [2024-07-13 07:10:43.173746] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107569 ] 00:24:35.489 [2024-07-13 07:10:43.315091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.489 [2024-07-13 07:10:43.417639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:36.421 07:10:44 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:36.421 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.422 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.679 [2024-07-13 07:10:44.573039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.679 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:36.680 07:10:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:36.937 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.937 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:24:36.937 07:10:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:37.196 [2024-07-13 07:10:45.194701] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:37.196 [2024-07-13 07:10:45.194737] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:37.196 [2024-07-13 07:10:45.194772] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:37.454 [2024-07-13 07:10:45.280890] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:37.454 [2024-07-13 07:10:45.337858] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:37.454 [2024-07-13 07:10:45.337909] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:38.021 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.021 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:38.021 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:38.021 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:38.021 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:38.021 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:38.021 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:38.021 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.021 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.022 07:10:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.022 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.281 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:38.281 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.281 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:38.281 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:38.281 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:38.281 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:38.281 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.281 07:10:46 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.281 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.282 [2024-07-13 07:10:46.161668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:38.282 [2024-07-13 07:10:46.162168] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:38.282 [2024-07-13 07:10:46.162213] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.282 [2024-07-13 07:10:46.248244] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:24:38.282 [2024-07-13 07:10:46.307568] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:38.282 [2024-07-13 07:10:46.307602] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:38.282 [2024-07-13 07:10:46.307610] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:38.282 07:10:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.722 [2024-07-13 07:10:47.450591] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:39.722 [2024-07-13 07:10:47.450633] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:39.722 [2024-07-13 07:10:47.455478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.722 [2024-07-13 07:10:47.455515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.722 [2024-07-13 07:10:47.455530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.722 [2024-07-13 07:10:47.455540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.722 [2024-07-13 07:10:47.455561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.722 [2024-07-13 07:10:47.455573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.722 [2024-07-13 07:10:47.455589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.722 [2024-07-13 07:10:47.455599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.722 [2024-07-13 07:10:47.455609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14daaf0 is same with the state(5) to be set 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 
max=10 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.722 [2024-07-13 07:10:47.465425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daaf0 (9): Bad file descriptor 00:24:39.722 [2024-07-13 07:10:47.475458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.722 [2024-07-13 07:10:47.475640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.722 [2024-07-13 07:10:47.475667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14daaf0 with addr=10.0.0.2, port=4420 00:24:39.722 [2024-07-13 07:10:47.475680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14daaf0 is same with the state(5) to be set 00:24:39.722 [2024-07-13 07:10:47.475698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daaf0 (9): Bad file descriptor 00:24:39.722 [2024-07-13 07:10:47.475727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.722 [2024-07-13 07:10:47.475740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.722 [2024-07-13 07:10:47.475751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.722 [2024-07-13 07:10:47.475768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
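The waitforcondition expansions that dominate this stretch of the trace (local cond=..., local max=10, then the eval and sleep 1 steps at common/autotest_common.sh@912-918) all come from one small polling helper. A minimal sketch of it, reconstructed from the xtrace lines alone, follows; the timeout message and the non-zero return on failure are assumptions, since the trace only ever shows the success path.

    # Polling helper seen expanded throughout this log (common/autotest_common.sh@912-918).
    # Reconstructed from the xtrace output; failure handling is assumed.
    waitforcondition() {
        local cond=$1
        local max=10

        while (( max-- )); do          # @914: up to 10 attempts
            if eval "$cond"; then      # @915: e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
                return 0               # @916
            fi
            sleep 1                    # @918
        done

        echo "condition not met: $cond" >&2   # assumed
        return 1                              # assumed
    }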
00:24:39.722 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.722 [2024-07-13 07:10:47.485521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.722 [2024-07-13 07:10:47.485652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.722 [2024-07-13 07:10:47.485674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14daaf0 with addr=10.0.0.2, port=4420 00:24:39.722 [2024-07-13 07:10:47.485685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14daaf0 is same with the state(5) to be set 00:24:39.722 [2024-07-13 07:10:47.485701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daaf0 (9): Bad file descriptor 00:24:39.722 [2024-07-13 07:10:47.485716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.722 [2024-07-13 07:10:47.485726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.722 [2024-07-13 07:10:47.485735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.722 [2024-07-13 07:10:47.485761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.722 [2024-07-13 07:10:47.495631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.722 [2024-07-13 07:10:47.495725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.722 [2024-07-13 07:10:47.495748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14daaf0 with addr=10.0.0.2, port=4420 00:24:39.722 [2024-07-13 07:10:47.495759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14daaf0 is same with the state(5) to be set 00:24:39.722 [2024-07-13 07:10:47.495775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daaf0 (9): Bad file descriptor 00:24:39.722 [2024-07-13 07:10:47.495821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.722 [2024-07-13 07:10:47.495834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.722 [2024-07-13 07:10:47.495844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.722 [2024-07-13 07:10:47.495859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.723 [2024-07-13 07:10:47.505688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.723 [2024-07-13 07:10:47.505804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.723 [2024-07-13 07:10:47.505825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14daaf0 with addr=10.0.0.2, port=4420 00:24:39.723 [2024-07-13 07:10:47.505836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14daaf0 is same with the state(5) to be set 00:24:39.723 [2024-07-13 07:10:47.505853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daaf0 (9): Bad file descriptor 00:24:39.723 [2024-07-13 07:10:47.505878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.723 [2024-07-13 07:10:47.505889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.723 [2024-07-13 07:10:47.505899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.723 [2024-07-13 07:10:47.505914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.723 [2024-07-13 07:10:47.515759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.723 [2024-07-13 07:10:47.515852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.723 [2024-07-13 07:10:47.515874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14daaf0 with addr=10.0.0.2, port=4420 00:24:39.723 [2024-07-13 07:10:47.515885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14daaf0 is same with the state(5) to be set 00:24:39.723 [2024-07-13 07:10:47.515901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daaf0 (9): Bad file descriptor 00:24:39.723 [2024-07-13 07:10:47.515927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.723 [2024-07-13 07:10:47.515939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.723 [2024-07-13 07:10:47.515948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.723 [2024-07-13 07:10:47.515963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
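The checks that follow (get_bdev_list == "nvme0n1 nvme0n2", then get_subsystem_paths nvme0 == 4421) rely on three list helpers whose jq pipelines are fully visible in the trace at host/discovery.sh@55, @59 and @63. A standalone sketch is below; the rpc_cmd stub and the hard-coded rpc.py and socket paths are assumptions made only so the sketch runs outside the harness.

    # Stand-in for the harness's rpc_cmd wrapper (an assumption; the real helper
    # lives in common/autotest_common.sh). Repo path taken from this job's workspace.
    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

    get_subsystem_names() {   # host/discovery.sh@59: controllers seen by the host app
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {         # host/discovery.sh@55: attached bdevs, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {   # host/discovery.sh@63: trsvcid of each path of controller $1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }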
00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.723 [2024-07-13 07:10:47.525810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.723 [2024-07-13 07:10:47.525884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.723 [2024-07-13 07:10:47.525905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14daaf0 with addr=10.0.0.2, port=4420 00:24:39.723 [2024-07-13 07:10:47.525916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14daaf0 is same with the state(5) to be set 00:24:39.723 [2024-07-13 07:10:47.525933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daaf0 (9): Bad file descriptor 00:24:39.723 [2024-07-13 07:10:47.525957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.723 [2024-07-13 07:10:47.525968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.723 [2024-07-13 07:10:47.525977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.723 [2024-07-13 07:10:47.525992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.723 [2024-07-13 07:10:47.535856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.723 [2024-07-13 07:10:47.535953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.723 [2024-07-13 07:10:47.535975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14daaf0 with addr=10.0.0.2, port=4420 00:24:39.723 [2024-07-13 07:10:47.535987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14daaf0 is same with the state(5) to be set 00:24:39.723 [2024-07-13 07:10:47.536003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14daaf0 (9): Bad file descriptor 00:24:39.723 [2024-07-13 07:10:47.536019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:39.723 [2024-07-13 07:10:47.536028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:39.723 [2024-07-13 07:10:47.536037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:39.723 [2024-07-13 07:10:47.536052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.723 [2024-07-13 07:10:47.536740] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:39.723 [2024-07-13 07:10:47.536771] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 
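The remaining assertions (is_notification_count_eq 0 here, and 2 after bdev_nvme_stop_discovery below) use the notification bookkeeping traced at host/discovery.sh@74-75 and @79-80. The sketch below reconstructs it, reusing waitforcondition and rpc_cmd from the sketches above; the rule for advancing notify_id is inferred from the observed values 0, 1, 2 and 4, so it is an assumption rather than the script's literal code.

    get_notification_count() {    # host/discovery.sh@74-75
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # assumed update rule
    }

    is_notification_count_eq() {  # host/discovery.sh@79-80
        expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }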
00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:39.723 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:39.724 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.724 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.724 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.724 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.724 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.724 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.724 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock 
notify_get_notifications -i 2 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.983 07:10:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.918 [2024-07-13 07:10:48.904955] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:40.918 [2024-07-13 07:10:48.904997] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:40.918 [2024-07-13 07:10:48.905032] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:40.918 [2024-07-13 07:10:48.991110] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:41.177 [2024-07-13 07:10:49.051678] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:41.177 [2024-07-13 07:10:49.051758] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.177 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.177 2024/07/13 07:10:49 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:41.177 request: 00:24:41.177 { 00:24:41.177 "method": "bdev_nvme_start_discovery", 00:24:41.177 "params": { 00:24:41.177 "name": "nvme", 00:24:41.178 "trtype": "tcp", 00:24:41.178 "traddr": "10.0.0.2", 00:24:41.178 "adrfam": "ipv4", 00:24:41.178 "trsvcid": "8009", 00:24:41.178 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:41.178 "wait_for_attach": true 00:24:41.178 } 00:24:41.178 } 00:24:41.178 Got JSON-RPC error response 00:24:41.178 GoRPCClient: error on JSON-RPC call 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@648 -- # local es=0 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.178 2024/07/13 07:10:49 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:41.178 request: 00:24:41.178 { 00:24:41.178 "method": "bdev_nvme_start_discovery", 00:24:41.178 "params": { 00:24:41.178 "name": "nvme_second", 00:24:41.178 "trtype": "tcp", 00:24:41.178 "traddr": "10.0.0.2", 00:24:41.178 "adrfam": "ipv4", 00:24:41.178 "trsvcid": "8009", 00:24:41.178 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:41.178 "wait_for_attach": true 00:24:41.178 } 00:24:41.178 } 00:24:41.178 Got JSON-RPC error response 00:24:41.178 GoRPCClient: error on JSON-RPC call 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:41.178 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.438 07:10:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.375 [2024-07-13 07:10:50.316372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.375 [2024-07-13 07:10:50.316475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14da680 with addr=10.0.0.2, port=8010 00:24:42.375 [2024-07-13 07:10:50.316501] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:42.375 [2024-07-13 07:10:50.316512] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:42.375 [2024-07-13 07:10:50.316522] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:43.312 [2024-07-13 07:10:51.316418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.312 [2024-07-13 07:10:51.316515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14da680 with addr=10.0.0.2, port=8010 00:24:43.312 [2024-07-13 07:10:51.316545] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:43.312 [2024-07-13 07:10:51.316577] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:43.312 [2024-07-13 07:10:51.316589] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:44.247 [2024-07-13 07:10:52.316218] 
bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:44.248 2024/07/13 07:10:52 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:24:44.248 request: 00:24:44.248 { 00:24:44.248 "method": "bdev_nvme_start_discovery", 00:24:44.248 "params": { 00:24:44.248 "name": "nvme_second", 00:24:44.248 "trtype": "tcp", 00:24:44.248 "traddr": "10.0.0.2", 00:24:44.248 "adrfam": "ipv4", 00:24:44.248 "trsvcid": "8010", 00:24:44.248 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:44.248 "wait_for_attach": false, 00:24:44.248 "attach_timeout_ms": 3000 00:24:44.248 } 00:24:44.248 } 00:24:44.248 Got JSON-RPC error response 00:24:44.248 GoRPCClient: error on JSON-RPC call 00:24:44.248 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:44.248 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:44.248 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:44.248 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.248 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 107569 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:44.506 rmmod nvme_tcp 00:24:44.506 rmmod nvme_fabrics 00:24:44.506 rmmod nvme_keyring 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 
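For reference, the three discovery outcomes exercised in the trace above can be replayed by hand with scripts/rpc.py against the host application listening on /tmp/host.sock. This is only a minimal sketch using the addresses, ports, controller names and hostnqn taken from this run; running it outside the test harness (and the rpc.py path shown) is an assumption.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # 1) First discovery on 10.0.0.2:8009 with -w (wait_for_attach) attaches nvme0 and returns once done.
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # 2) Repeating the call while that discovery service exists is rejected with Code=-17 (File exists),
  #    which is what the NOT wrapper above treats as the expected failure.
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "expected failure: File exists"

  # 3) Pointing a second discovery at a port with no listener (8010) and passing -T 3000
  #    (attach_timeout_ms) fails after ~3 s with Code=-110 (Connection timed out).
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected failure: Connection timed out"
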
00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 107513 ']' 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 107513 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 107513 ']' 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 107513 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107513 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:44.506 killing process with pid 107513 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107513' 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 107513 00:24:44.506 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 107513 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:44.765 00:24:44.765 real 0m11.327s 00:24:44.765 user 0m22.243s 00:24:44.765 sys 0m1.766s 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:44.765 ************************************ 00:24:44.765 07:10:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.765 END TEST nvmf_host_discovery 00:24:44.765 ************************************ 00:24:45.022 07:10:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:45.022 07:10:52 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:45.022 07:10:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:45.022 07:10:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.022 07:10:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:45.022 ************************************ 00:24:45.022 START TEST nvmf_host_multipath_status 00:24:45.022 ************************************ 00:24:45.022 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:45.022 * Looking for test storage... 00:24:45.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:45.022 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:45.022 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:45.022 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.022 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.022 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.022 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.022 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.022 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:45.023 07:10:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:45.023 Cannot find device "nvmf_tgt_br" 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:24:45.023 Cannot find device "nvmf_tgt_br2" 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:45.023 Cannot find device "nvmf_tgt_br" 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:45.023 Cannot find device "nvmf_tgt_br2" 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:24:45.023 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:45.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:45.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:45.281 07:10:53 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:45.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:24:45.281 00:24:45.281 --- 10.0.0.2 ping statistics --- 00:24:45.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.281 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:45.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:45.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:24:45.281 00:24:45.281 --- 10.0.0.3 ping statistics --- 00:24:45.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.281 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:45.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:24:45.281 00:24:45.281 --- 10.0.0.1 ping statistics --- 00:24:45.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.281 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.281 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=108051 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 108051 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 108051 ']' 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.540 07:10:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:45.540 [2024-07-13 07:10:53.426657] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
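The veth/namespace topology that nvmftestinit just brought up can be reproduced roughly as follows. This is a condensed sketch of the commands traced above (run as root on a scratch host), not the exact nvmf_veth_init implementation; the second target interface (nvmf_tgt_if2 / 10.0.0.3, bridged via nvmf_tgt_br2) is configured the same way and omitted here for brevity.

  # Target-side netns plus two veth pairs: initiator <-> bridge and bridge <-> target namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Address the two ends: 10.0.0.1 stays on the host (initiator), 10.0.0.2 lives in the namespace (target).
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side veth ends together and open the NVMe/TCP port.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Load the initiator driver and start the target inside the namespace (pid 108051 in this run).
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
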
00:24:45.540 [2024-07-13 07:10:53.426761] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.540 [2024-07-13 07:10:53.568762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:45.798 [2024-07-13 07:10:53.679248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.798 [2024-07-13 07:10:53.679323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.798 [2024-07-13 07:10:53.679345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.798 [2024-07-13 07:10:53.679356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.798 [2024-07-13 07:10:53.679365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.798 [2024-07-13 07:10:53.679542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.798 [2024-07-13 07:10:53.679569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.367 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.367 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:46.367 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.367 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.367 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:46.626 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.626 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=108051 00:24:46.626 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:46.626 [2024-07-13 07:10:54.652415] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.626 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:46.884 Malloc0 00:24:46.885 07:10:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:47.452 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:47.452 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.712 [2024-07-13 07:10:55.659558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.712 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:24:47.971 [2024-07-13 07:10:55.871657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=108149 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 108149 /var/tmp/bdevperf.sock 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 108149 ']' 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.971 07:10:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:48.909 07:10:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.909 07:10:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:48.909 07:10:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:49.168 07:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:49.427 Nvme0n1 00:24:49.427 07:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:49.694 Nvme0n1 00:24:49.694 07:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:49.694 07:10:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:52.228 07:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:52.228 07:10:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:52.228 07:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4421 -n optimized 00:24:52.228 07:11:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:53.164 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:53.423 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:53.423 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.423 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:53.423 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.423 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:53.423 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.423 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:53.681 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:53.681 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:53.681 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.681 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:53.940 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.940 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:53.940 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:53.940 07:11:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.199 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.199 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:54.199 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.199 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.457 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.457 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:54.457 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.457 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:54.716 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.716 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:54.716 07:11:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:54.972 07:11:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:55.538 07:11:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:56.479 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:56.479 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:56.479 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.479 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.479 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:56.479 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:56.479 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.479 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:56.737 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.737 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:56.737 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:56.737 07:11:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.995 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:56.995 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:56.995 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.995 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.253 07:11:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.253 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:57.253 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.253 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.512 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.512 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.512 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.512 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.770 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.770 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:57.770 07:11:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:58.029 07:11:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:58.287 07:11:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:59.222 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:59.222 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:59.222 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.222 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.480 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.480 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:59.480 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.480 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.739 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.739 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:59.739 07:11:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.739 07:11:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.997 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.997 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.997 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.997 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.255 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.255 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.255 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.255 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.513 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.513 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:00.513 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.513 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.771 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.771 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:00.771 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:01.030 07:11:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:01.288 07:11:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:02.222 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:02.222 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:02.222 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.222 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.480 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.480 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:02.480 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.480 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.738 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.738 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.738 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.738 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.996 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.996 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.996 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.996 07:11:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:03.254 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.254 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:03.254 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.254 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.512 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.512 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:03.512 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.512 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.769 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:03.769 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:03.769 07:11:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:04.026 07:11:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:04.284 07:11:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:05.220 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:05.220 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:05.220 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.220 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.479 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.479 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.479 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.479 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.737 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.738 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.738 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.738 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.996 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.996 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.996 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.996 07:11:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.255 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.255 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:06.255 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.255 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.514 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.514 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:06.514 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.514 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.773 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.773 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:06.773 07:11:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:07.032 07:11:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:07.290 07:11:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:08.224 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:08.224 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:08.224 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.224 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:08.482 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.482 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:08.482 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.482 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.741 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.741 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.741 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.741 07:11:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.017 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.017 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:09.276 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.276 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:09.276 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.276 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:09.276 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:09.276 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.535 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:09.535 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:09.535 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.535 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:09.794 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.794 07:11:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:10.053 07:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:10.053 07:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:10.312 07:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:10.572 07:11:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:11.506 07:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:11.506 07:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:11.506 07:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.506 07:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:11.764 07:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.764 07:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:11.764 07:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.764 07:11:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:12.021 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.021 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:12.021 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:12.021 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.279 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.279 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:12.279 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:12.279 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.538 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.538 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:12.538 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.538 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:12.797 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.797 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:12.797 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:12.797 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.056 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.056 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:13.057 07:11:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:13.316 07:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:13.575 07:11:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:14.536 
07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:14.536 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:14.536 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.536 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:14.794 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.794 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:14.794 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:14.794 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.052 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.052 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:15.052 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:15.052 07:11:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.310 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.310 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:15.310 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.310 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:15.569 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.569 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:15.569 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.569 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:15.828 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.828 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:15.828 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.828 07:11:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:16.087 07:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.087 07:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:16.087 07:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:16.345 07:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:16.604 07:11:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:17.541 07:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:17.541 07:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:17.541 07:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.541 07:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:17.800 07:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.800 07:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:17.800 07:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.800 07:11:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.059 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.059 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.059 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.059 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:18.317 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.317 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.317 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.317 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.576 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.576 07:11:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:18.576 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.576 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:18.837 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.837 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:18.837 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:18.837 07:11:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.096 07:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.096 07:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:19.096 07:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:19.354 07:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:19.613 07:11:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:20.549 07:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:20.549 07:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:20.549 07:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.549 07:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:20.808 07:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.808 07:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:20.808 07:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.808 07:11:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.066 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.066 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.066 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.066 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.325 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.325 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.325 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.325 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.891 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.891 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:21.891 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.891 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.891 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.891 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:21.891 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.891 07:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 108149 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 108149 ']' 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 108149 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108149 00:25:22.458 killing process with pid 108149 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108149' 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 108149 00:25:22.458 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 108149 00:25:22.458 Connection closed with partial response: 00:25:22.458 
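The xtrace above repeats one small pattern for every port and every field: set the ANA state of the two listeners, sleep for a second so the initiator can act on the ANA change, then ask bdevperf for its view of the I/O paths and compare a single field against the expected value. A minimal sketch of that pattern, reconstructed from the commands echoed in the trace (the rpc.py path, bdevperf socket, NQN and addresses are copied from the log; the function bodies are illustrative, not the actual multipath_status.sh helpers):

    #!/usr/bin/env bash
    # Sketch of the set_ANA_state / port_status pattern exercised in the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # $1 = ANA state for listener 4420, $2 = ANA state for listener 4421
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
    port_status() {
        local actual
        actual=$("$rpc" -s "$sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$actual" == "$3" ]]
    }

    # Example matching the 07:11:08 step above: 4420 non_optimized, 4421 inaccessible,
    # so 4420 should be the current path and 4421 should no longer be accessible.
    set_ANA_state non_optimized inaccessible
    sleep 1
    port_status 4420 current true && port_status 4421 accessible false

Under the default active_passive policy only one path reports current=true at a time; after the trace switches the bdev with "bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active" at 07:11:17 and sets both listeners to optimized, both paths report current=true, which is what the "check_status true true true true true true" step at 07:11:19 verifies.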
00:25:22.458 00:25:22.743 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 108149 00:25:22.743 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:22.743 [2024-07-13 07:10:55.951457] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:25:22.743 [2024-07-13 07:10:55.951634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108149 ] 00:25:22.743 [2024-07-13 07:10:56.097199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.743 [2024-07-13 07:10:56.209519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.743 Running I/O for 90 seconds... 00:25:22.743 [2024-07-13 07:11:12.006493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.006949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.006968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.007006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.007028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.007041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.007060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.007073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.007091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.007105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.007123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.007145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.007163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.007176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.743 [2024-07-13 07:11:12.007195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.743 [2024-07-13 07:11:12.007208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
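From here to the end of try.txt the dump is dominated by per-command completion prints like the ones just above. The "(03/02)" pair in each line is the NVMe status code type and status code: SCT 0x3 is the path-related group and SC 0x02 is Asymmetric Access Inaccessible, which is what queued WRITEs and READs should complete with once the listener they were routed to has had its ANA state set to inaccessible. When skimming a dump of this size it can be easier to tally the status pairs than to read them line by line; a small reading aid, not part of the test, with the file path copied from the cat command above:

    # Tally the (sct/sc) status pairs printed in the captured bdevperf log.
    grep -o '([0-9a-f][0-9a-f]/[0-9a-f][0-9a-f])' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c | sort -rn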
00:25:22.743 [2024-07-13 07:11:12.007229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.007970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.007989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.008003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.008021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.008035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.008054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.008069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.010925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.010952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.010976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.010992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.011011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.011025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.011046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.011061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.011736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.011762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.011788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.011805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.011825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.744 [2024-07-13 07:11:12.011839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.011859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.011887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.011924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.011953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.011972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.744 [2024-07-13 07:11:12.011985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.012004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.744 [2024-07-13 07:11:12.012018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.012037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.744 [2024-07-13 07:11:12.012050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.012069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.744 [2024-07-13 07:11:12.012082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.012101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.744 [2024-07-13 07:11:12.012115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.012134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.744 [2024-07-13 07:11:12.012147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.012166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.744 [2024-07-13 07:11:12.012179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.012199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.012213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.012232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.012246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.744 [2024-07-13 07:11:12.012266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.744 [2024-07-13 07:11:12.012279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
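For context, the WRITE and READ commands being completed here are bdevperf's own workload: the header of try.txt shows the app starting on core mask 0x4 and announcing "Running I/O for 90 seconds...", and every status check earlier in the trace talks to it over the RPC socket /var/tmp/bdevperf.sock. The invocation below is only an illustration of how such a run is typically driven; the binary path, the wait-for-RPC flag and the queue-depth/I/O-size/workload values are assumptions, and only the RPC socket and the 90-second runtime are actually visible in the log:

    # Illustrative only -- not the exact command line used by this test.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -z \
        -q 128 -o 4096 -w verify -t 90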
00:25:22.745 [2024-07-13 07:11:12.012960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.012974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.012992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.013551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.013581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.014149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.014172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.014195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.014221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.014242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.014257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.014277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.745 [2024-07-13 07:11:12.014291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.745 [2024-07-13 07:11:12.014310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 
[2024-07-13 07:11:12.014579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44584 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.014978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.014997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015306] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015688] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.746 [2024-07-13 07:11:12.015858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.746 [2024-07-13 07:11:12.015878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.015897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.015926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.015960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.015973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.015991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 
m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.016976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.016991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.017827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.017852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.017876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.017892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.017912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.747 [2024-07-13 07:11:12.017927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.747 [2024-07-13 07:11:12.017946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.747 [2024-07-13 07:11:12.017974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.017992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.748 [2024-07-13 07:11:12.018070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.748 [2024-07-13 07:11:12.018103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.748 [2024-07-13 07:11:12.018135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.748 [2024-07-13 07:11:12.018168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.748 [2024-07-13 07:11:12.018200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.748 [2024-07-13 07:11:12.018243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.748 [2024-07-13 07:11:12.018283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.018967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.018981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:25:22.748 [2024-07-13 07:11:12.019032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.748 [2024-07-13 07:11:12.019494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.748 [2024-07-13 07:11:12.019508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.019528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.019542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.019562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.019576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.019604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.019632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.019653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.019668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.019688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.019703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.749 [2024-07-13 07:11:12.020703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.020955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.020989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44584 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021385] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.749 [2024-07-13 07:11:12.021578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.749 [2024-07-13 07:11:12.021619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.021667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.021702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.021735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.021770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.021804] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.021837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.021871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.021906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.021973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.021988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 
p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.022978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.022997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.023012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.023030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.023044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.023062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.023075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.023094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.023107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.023126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.023140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.024003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.024028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.024051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.750 [2024-07-13 07:11:12.024067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.750 [2024-07-13 07:11:12.024086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.751 [2024-07-13 07:11:12.024165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.751 [2024-07-13 07:11:12.024275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.751 [2024-07-13 07:11:12.024307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.751 [2024-07-13 07:11:12.024340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.751 [2024-07-13 07:11:12.024379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.751 [2024-07-13 07:11:12.024411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.751 [2024-07-13 07:11:12.024443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.751 [2024-07-13 07:11:12.024475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.024976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.024995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.025009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.025028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.025041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.025060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.025073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.025093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.031810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.031871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.031891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.031911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.031924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.031945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.031958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:25:22.751 [2024-07-13 07:11:12.031977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.031990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.032008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.032022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.032040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.032054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.032072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.751 [2024-07-13 07:11:12.032086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.751 [2024-07-13 07:11:12.032104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.032496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.032509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 
[2024-07-13 07:11:12.033609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44584 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.033971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.033990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.034004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.034022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.034035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.034053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.034067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.034085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.034098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.034116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.752 [2024-07-13 07:11:12.034129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.752 [2024-07-13 07:11:12.034154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034251] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034632] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 
m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.034972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.034986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.753 [2024-07-13 07:11:12.035541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.753 [2024-07-13 07:11:12.035555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.035583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.035599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.035618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.035632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.035650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.035663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.035682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.035695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.035713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.035727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.035745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.035759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.036536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.036590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.036636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.036668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.036699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.754 [2024-07-13 07:11:12.036731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.036763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.036795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.754 [2024-07-13 07:11:12.036827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.754 [2024-07-13 07:11:12.036859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.754 [2024-07-13 07:11:12.036891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.754 [2024-07-13 07:11:12.036923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.754 [2024-07-13 07:11:12.036954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.036973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.754 [2024-07-13 07:11:12.036987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.754 [2024-07-13 07:11:12.037026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.754 [2024-07-13 07:11:12.037684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.754 [2024-07-13 07:11:12.037698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:25:22.755 [2024-07-13 07:11:12.037717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.037731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.037750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.037764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.037783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.037798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.037816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.037830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.037858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.037873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.037892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.037906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.037925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.037939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.037958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.037972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.037992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.038978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.038996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 
[2024-07-13 07:11:12.039309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44584 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.755 [2024-07-13 07:11:12.039684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.755 [2024-07-13 07:11:12.039705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.039719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.039737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.039751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.039769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.039782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.039800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.039814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.039833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.039846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.039864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.039878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.039896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.039909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.039928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.039941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.039959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:124 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.039973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.039990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040284] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 
m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.756 [2024-07-13 07:11:12.040870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.756 [2024-07-13 07:11:12.040888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.040909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.040928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.040942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.040961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.040974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.040993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.041350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.041364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.757 [2024-07-13 07:11:12.042350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.757 [2024-07-13 07:11:12.042477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.757 [2024-07-13 07:11:12.042522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.757 [2024-07-13 07:11:12.042554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.757 [2024-07-13 07:11:12.042602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.757 [2024-07-13 07:11:12.042635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.757 [2024-07-13 07:11:12.042667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.757 [2024-07-13 07:11:12.042699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.042981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.042999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.043013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.043031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.043045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.757 [2024-07-13 07:11:12.043063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.757 [2024-07-13 07:11:12.043077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:25:22.758 [2024-07-13 07:11:12.043388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.043858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.043872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 
[2024-07-13 07:11:12.044928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.044978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.044991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.758 [2024-07-13 07:11:12.045009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.758 [2024-07-13 07:11:12.045023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44584 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045589] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.045983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.045997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 
dnr:0 00:25:22.759 [2024-07-13 07:11:12.046237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.759 [2024-07-13 07:11:12.046413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.759 [2024-07-13 07:11:12.046435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.046976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.046994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.047008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.047773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.047796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.047819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.047836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.047854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.047868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.047886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.047899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.047917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.047931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.047949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.047976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.047996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.048010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.760 [2024-07-13 07:11:12.048042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.048074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.048106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.760 [2024-07-13 07:11:12.048138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.760 [2024-07-13 07:11:12.048169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.760 [2024-07-13 07:11:12.048201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.760 [2024-07-13 07:11:12.048233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.760 [2024-07-13 07:11:12.048265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.760 [2024-07-13 07:11:12.048296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.760 [2024-07-13 07:11:12.048329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.760 [2024-07-13 07:11:12.048368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.760 [2024-07-13 07:11:12.048388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.048984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.048998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:25:22.761 [2024-07-13 07:11:12.049016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.049452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.049990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.050014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.050037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.050052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.050070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.050084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.050103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.050128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.050148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.050162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.761 [2024-07-13 07:11:12.050181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.761 [2024-07-13 07:11:12.050195] nvme_qpair.c: 
00:25:22.761-00:25:22.766 [2024-07-13 07:11:12.050213 - 07:11:12.058234] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated WRITE (and a few READ) commands, sqid:1 nsid:1, lba range ~43984-45000 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000 / SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; several hundred identical notice pairs condensed.
00:25:22.766 [2024-07-13 07:11:12.058247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.766 [2024-07-13 07:11:12.059057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.766 [2024-07-13 07:11:12.059081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.766 [2024-07-13 07:11:12.059104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.766 [2024-07-13 07:11:12.059119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.766 [2024-07-13 07:11:12.059138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.766 [2024-07-13 07:11:12.059151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.766 [2024-07-13 07:11:12.059170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.766 [2024-07-13 07:11:12.059184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.766 [2024-07-13 07:11:12.059202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.766 [2024-07-13 07:11:12.059215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.766 [2024-07-13 07:11:12.059233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.766 [2024-07-13 07:11:12.059246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.766 [2024-07-13 07:11:12.059265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.766 [2024-07-13 07:11:12.059278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.766 [2024-07-13 07:11:12.059297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.766 [2024-07-13 07:11:12.059310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.766 [2024-07-13 07:11:12.059329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.767 [2024-07-13 07:11:12.059481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.767 [2024-07-13 07:11:12.059513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.767 [2024-07-13 07:11:12.059545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.767 [2024-07-13 07:11:12.059607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.767 [2024-07-13 07:11:12.059641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.767 [2024-07-13 07:11:12.059674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.767 [2024-07-13 07:11:12.059707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.059977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.059990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:25:22.767 [2024-07-13 07:11:12.060072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.767 [2024-07-13 07:11:12.060737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.767 [2024-07-13 07:11:12.060750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.060769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.060783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.768 [2024-07-13 07:11:12.061612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44504 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.061973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.061987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062268] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.768 [2024-07-13 07:11:12.062533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.768 [2024-07-13 07:11:12.062547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 
07:11:12.062644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.062969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 
sqhd:000d p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.062987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063646] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.063955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.769 [2024-07-13 07:11:12.063969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.769 [2024-07-13 07:11:12.064731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.769 [2024-07-13 07:11:12.064755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.064778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.064794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.064813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.064827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.064845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.064859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.064878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.064901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.064921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.064935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.064954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.064968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.064987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:22 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.770 [2024-07-13 07:11:12.065191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.770 [2024-07-13 07:11:12.065223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.770 [2024-07-13 07:11:12.065254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.770 [2024-07-13 07:11:12.065293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.770 [2024-07-13 07:11:12.065327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.770 [2024-07-13 07:11:12.065358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.770 [2024-07-13 07:11:12.065390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:25:22.770 [2024-07-13 07:11:12.065750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.770 [2024-07-13 07:11:12.065953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.770 [2024-07-13 07:11:12.065971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.065985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.066973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.066994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.771 [2024-07-13 07:11:12.067044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.771 [2024-07-13 07:11:12.067758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.771 [2024-07-13 07:11:12.067773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.067796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.067810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.067832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.067846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.067868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.067882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.067904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.067919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.067941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.067970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.067991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:25:22.772 [2024-07-13 07:11:12.068168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.068977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.068991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.069012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.069026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.069048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.069062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.069083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.069104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.069126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.069140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.069162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.772 [2024-07-13 07:11:12.069176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.772 [2024-07-13 07:11:12.069197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 
[2024-07-13 07:11:12.069247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:12.069708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:12.069729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.543899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116728 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 
dnr:0 00:25:22.773 [2024-07-13 07:11:27.544768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.544937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.544992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.545013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.773 [2024-07-13 07:11:27.545027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.545063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.773 [2024-07-13 07:11:27.545078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.773 [2024-07-13 07:11:27.545098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.773 [2024-07-13 07:11:27.545112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.545132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.774 [2024-07-13 07:11:27.545146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.545177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.774 [2024-07-13 07:11:27.545192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.545908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.774 [2024-07-13 07:11:27.545991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.546017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.774 [2024-07-13 07:11:27.546033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.546054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.774 [2024-07-13 07:11:27.546069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.546089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.774 [2024-07-13 07:11:27.546103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.546122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.774 [2024-07-13 07:11:27.546137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.546156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.774 [2024-07-13 07:11:27.546182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.546203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.774 [2024-07-13 07:11:27.546218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.546237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.774 [2024-07-13 07:11:27.546252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.546271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.774 [2024-07-13 07:11:27.546284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.774 [2024-07-13 07:11:27.546303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.774 [2024-07-13 07:11:27.546317] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:25:22.774 [2024-07-13 07:11:27.546336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:22.774 [2024-07-13 07:11:27.546350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:25:22.774 [2024-07-13 07:11:27.546369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:22.774 [2024-07-13 07:11:27.546383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
[... repeated nvme_qpair.c NOTICE pairs omitted: READ (SGL TRANSPORT DATA BLOCK TRANSPORT) and WRITE (SGL DATA BLOCK OFFSET) commands on sqid:1, lba 116296-117768, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); driver timestamps 07:11:27.546-07:11:27.562, console time 00:25:22.774-00:25:22.779 ...]
00:25:22.779 [2024-07-13 07:11:27.562968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.779 [2024-07-13 07:11:27.562982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.563001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.563016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.563035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.563049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.563068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.563081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.563101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.563114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.563133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.563147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.563166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.563180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.563199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.563213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.779 [2024-07-13 07:11:27.564483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.779 [2024-07-13 07:11:27.564516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.779 [2024-07-13 07:11:27.564549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.779 [2024-07-13 07:11:27.564618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 
07:11:27.564653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.779 [2024-07-13 07:11:27.564705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.779 [2024-07-13 07:11:27.564753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.779 [2024-07-13 07:11:27.564789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.779 [2024-07-13 07:11:27.564826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:22.779 [2024-07-13 07:11:27.564847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.779 [2024-07-13 07:11:27.564862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.564882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.564921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.564971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.564985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.565147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.565180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.565354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.565386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.565421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.565505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.565519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.567526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.567583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.567676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.567717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.567753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.567789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.567825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.567860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.567895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.567925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.567993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.568026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.568058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.568091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.568123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.568164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:70 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.568199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.568231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.568263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.568300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.568332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.568364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.780 [2024-07-13 07:11:27.568395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.780 [2024-07-13 07:11:27.568428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.780 [2024-07-13 07:11:27.568446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.568459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.568478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.568491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 
07:11:27.568509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.568523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.568541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.568555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.568615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.568641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.568664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.568679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.568700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.568716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.570749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.570778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.570805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.570821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.570843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.570859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.570935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.570949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.570968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.570982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.781 [2024-07-13 07:11:27.571746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.571803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.571818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.572499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.781 [2024-07-13 07:11:27.572524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.781 [2024-07-13 07:11:27.572547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.572580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.572617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.572668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.572703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.572750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:57 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.572787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.572821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.572855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.572905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.572968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.572987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.573000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.573032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.573064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.573096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.573127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 
07:11:27.573145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.573159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.573198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.573233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.573264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.573297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:118480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.573328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.573360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.573922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.573976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.574015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.574047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.574080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.574113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.574145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.574177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.574222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.782 [2024-07-13 07:11:27.574254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.574286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.782 [2024-07-13 07:11:27.574305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.782 [2024-07-13 07:11:27.574318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.574351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.574383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.574455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.574490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.574525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.574559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.574609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.574644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.574689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.574725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.574759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.783 [2024-07-13 07:11:27.574794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.574829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.574901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.574932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.574945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.575423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.575462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.575496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.575529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.575578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.575674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.575713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.575749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.575784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.575820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.575855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.575903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.575939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.575953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.576064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.576096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 
07:11:27.576651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.576967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.576980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.577000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.577013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.577031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.577045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.577063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.577077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.577095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.577109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.577127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.577147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.577168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.783 [2024-07-13 07:11:27.577182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.577200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.577214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.577232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.783 [2024-07-13 07:11:27.577246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.783 [2024-07-13 07:11:27.577266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.577280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.577743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.577767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.577791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.577807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.577827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.577841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.577860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.577874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.577893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.577907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.577940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.577954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.577973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.577986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.578006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.578019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.578049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.578070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.784 [2024-07-13 07:11:27.579333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.579365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.579527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.579576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.579673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:75 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.579707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.579875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.579908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.579927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.579956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.580760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.580785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.580811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.784 [2024-07-13 07:11:27.580827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 
07:11:27.580846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.580861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.580891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.580908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.580941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.580955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.580974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.580987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.581006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.581020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.581038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.581051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.581070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.581084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.581346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.581378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.581403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.581419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.784 [2024-07-13 07:11:27.581438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.784 [2024-07-13 07:11:27.581453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.581485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.581517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.581577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.581643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.581680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.581714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.581748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.581783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.581818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.581852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.581886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.581935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.581968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.581983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.582001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.582016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.582364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.582395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.582449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.582476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.582497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.582512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.582531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.582545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.582591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.582608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.582629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118656 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.785 [2024-07-13 07:11:27.582643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.582663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.582678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.585726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.585758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.585785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.585800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.585819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.585834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.585854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.585868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.585887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.585900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.585920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.585949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.585968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.585982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.586012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.785 [2024-07-13 07:11:27.586027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.586046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:118336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.586060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.586078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.586092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.586111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.586125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.586144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.586157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.785 [2024-07-13 07:11:27.586176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.785 [2024-07-13 07:11:27.586189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.786 [2024-07-13 07:11:27.586223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.586255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.786 [2024-07-13 07:11:27.586287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.786 [2024-07-13 07:11:27.586319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.786 [2024-07-13 07:11:27.586352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 
07:11:27.586370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.786 [2024-07-13 07:11:27.586384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.586453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.586486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.586519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.586552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.586598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.586632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.586651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.586666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.587966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.786 [2024-07-13 07:11:27.587991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.786 [2024-07-13 07:11:27.588010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
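The pairs of *NOTICE* lines in this stretch of the log are SPDK's own tracing from nvme_qpair.c: nvme_io_qpair_print_command prints each queued READ/WRITE (sqid, cid, nsid, lba, len), and spdk_nvme_print_completion prints how it completed. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is the NVMe path-related status (status code type 0x3, status code 0x02, ANA Inaccessible), which is the kind of completion one would expect while this test presumably flips the target's ANA state during failover. As a rough, non-authoritative way to tally these completions offline, the sketch below parses the completion notices with a regex; the pattern, the summarize() helper, and the console.log path are illustrative assumptions for this log format, not SPDK or Jenkins APIs.

```python
import re
from collections import Counter

# Matches spdk_nvme_print_completion() notices as they appear in this console
# log, e.g.:
#   ... *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
# "(03/02)" is (status code type / status code) in hex: 0x3 = path-related,
# 0x02 = ANA Inaccessible. The status text may not contain '*' or '(', which
# keeps a match from spilling across a neighbouring command notice.
COMPLETION_RE = re.compile(
    r"\*NOTICE\*: (?P<status>[^*(]+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+) cdw0:(?P<cdw0>[0-9a-f]+) "
    r"sqhd:(?P<sqhd>[0-9a-f]+) p:\d m:\d dnr:(?P<dnr>\d)"
)

def summarize(log_path: str) -> Counter:
    """Count (status text, sct, sc) triples seen in a saved console log."""
    counts: Counter = Counter()
    with open(log_path, errors="replace") as f:
        for line in f:
            for m in COMPLETION_RE.finditer(line):
                counts[(m["status"], int(m["sct"], 16), int(m["sc"], 16))] += 1
    return counts

if __name__ == "__main__":
    # "console.log" is a placeholder for wherever this build output was saved.
    for (status, sct, sc), n in summarize("console.log").most_common():
        print(f"{n:6d}  {status} (sct=0x{sct:x}, sc=0x{sc:x})")
```

In this window of the run every completion printed carries sct 0x3 / sc 0x02, so a tally like this mainly confirms that the I/O failures are all of the expected ANA-inaccessible kind rather than genuine media or transport errors.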
00:25:22.786 [2024-07-13 07:11:27.588 through 07:11:27.599] nvme_qpair.c: *NOTICE*: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs: READ and WRITE commands on sqid:1 (nsid:1, len:8, lba 117800 to 119680), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:25:22.790 Received shutdown signal, test time was about 32.445534 seconds
00:25:22.790
00:25:22.790                                                                                        Latency(us)
00:25:22.790 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:22.790 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:22.790 Verification LBA range: start 0x0 length 0x4000
00:25:22.790      Nvme0n1                                                              :      32.44    9872.78      38.57       0.00     0.00   12941.50     618.12 4087539.90
00:25:22.790 ===================================================================================================================
00:25:22.790      Total                                                                :               9872.78      38.57       0.00     0.00   12941.50     618.12 4087539.90
00:25:22.790 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:22.790 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:22.790 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:25:22.790 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:22.790 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:22.790 07:11:30
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:23.049 rmmod nvme_tcp 00:25:23.049 rmmod nvme_fabrics 00:25:23.049 rmmod nvme_keyring 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 108051 ']' 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 108051 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 108051 ']' 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 108051 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 108051 00:25:23.049 killing process with pid 108051 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 108051' 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 108051 00:25:23.049 07:11:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 108051 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:23.308 ************************************ 00:25:23.308 END TEST nvmf_host_multipath_status 00:25:23.308 ************************************ 00:25:23.308 00:25:23.308 real 0m38.380s 00:25:23.308 user 2m4.352s 00:25:23.308 sys 0m9.776s 
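As a quick cross-check of the bdevperf summary above (assuming the reported 4096-byte I/O size and the roughly 32.44 s runtime), the MiB/s and total-data figures follow directly from the IOPS column; a minimal sketch, not part of the test run:

# Hypothetical spot-check, not emitted by the test: derive MiB/s and total data from the summary line.
awk 'BEGIN { iops = 9872.78; runtime = 32.44; io = 4096
             printf "%.2f MiB/s\n",       iops * io / 1048576              # matches the 38.57 MiB/s column
             printf "%.2f GiB verified\n", iops * runtime * io / 1073741824 }'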
00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:23.308 07:11:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:23.308 07:11:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:23.308 07:11:31 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:23.308 07:11:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:23.308 07:11:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:23.308 07:11:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.308 ************************************ 00:25:23.308 START TEST nvmf_discovery_remove_ifc 00:25:23.308 ************************************ 00:25:23.308 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:23.567 * Looking for test storage... 00:25:23.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc 
-- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:23.567 Cannot find device "nvmf_tgt_br" 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:23.567 Cannot find device "nvmf_tgt_br2" 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:23.567 Cannot find device "nvmf_tgt_br" 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:23.567 Cannot find device "nvmf_tgt_br2" 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:23.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:23.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.2/24 dev nvmf_tgt_if 00:25:23.567 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:23.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:25:23.827 00:25:23.827 --- 10.0.0.2 ping statistics --- 00:25:23.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.827 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:23.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:23.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:25:23.827 00:25:23.827 --- 10.0.0.3 ping statistics --- 00:25:23.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.827 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:23.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:23.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:23.827 00:25:23.827 --- 10.0.0.1 ping statistics --- 00:25:23.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.827 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=109433 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 109433 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 109433 ']' 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.827 07:11:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.827 [2024-07-13 07:11:31.858620] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
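The nvmf_veth_init trace above builds the per-test network: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2, the initiator stays in the root namespace on 10.0.0.1, and the veth pairs are stitched together with a bridge before TCP port 4420 is opened and reachability is ping-checked. Condensed into a standalone sketch using only commands visible in the trace (the second target interface, nvmf_tgt_if2 on 10.0.0.3, is created the same way and omitted here for brevity):

# target side lives in its own network namespace; initiator stays in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator, 10.0.0.2 = target (inside the namespace)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the two veth peers together and let NVMe/TCP (port 4420) traffic through
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check: the initiator can reach the target address
ping -c 1 10.0.0.2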
00:25:23.827 [2024-07-13 07:11:31.858732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.086 [2024-07-13 07:11:32.002543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.086 [2024-07-13 07:11:32.101096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.086 [2024-07-13 07:11:32.101160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.086 [2024-07-13 07:11:32.101174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.086 [2024-07-13 07:11:32.101185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.086 [2024-07-13 07:11:32.101194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.086 [2024-07-13 07:11:32.101225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.019 [2024-07-13 07:11:32.898971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.019 [2024-07-13 07:11:32.907058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:25.019 null0 00:25:25.019 [2024-07-13 07:11:32.938974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=109483 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 109483 /tmp/host.sock 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 109483 ']' 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.019 Waiting for process to start up and listen on 
UNIX domain socket /tmp/host.sock... 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.019 07:11:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.019 [2024-07-13 07:11:33.020120] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:25:25.019 [2024-07-13 07:11:33.020218] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109483 ] 00:25:25.277 [2024-07-13 07:11:33.160530] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.277 [2024-07-13 07:11:33.265358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.212 07:11:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.212 07:11:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.212 07:11:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:26.212 07:11:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.212 07:11:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.149 [2024-07-13 07:11:35.082119] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:27.149 [2024-07-13 07:11:35.082152] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:27.149 [2024-07-13 07:11:35.082169] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:27.149 [2024-07-13 07:11:35.168221] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 
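The bring-up just traced starts a second SPDK app (the "host", pid 109483) with its RPC socket at /tmp/host.sock and bdev_nvme debug tracing, then points it at the discovery service the target exposes on 10.0.0.2:8009. A minimal sketch of the same sequence issued by hand, assuming SPDK's scripts/rpc.py as the RPC client (the test's rpc_cmd helper wraps it); the binary path and all flags are taken from the trace above:

# host-side SPDK app with its own RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

# apply the bdev_nvme options the test uses, finish init, then attach to discovery
scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
scripts/rpc.py -s /tmp/host.sock framework_start_init
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach

With --wait-for-attach the call does not return until the discovered subsystem (nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420) has been attached, which is why the very next get_bdev_list check in the trace already sees bdev nvme0n1.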
00:25:27.422 [2024-07-13 07:11:35.225318] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:27.422 [2024-07-13 07:11:35.225381] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:27.422 [2024-07-13 07:11:35.225411] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:27.422 [2024-07-13 07:11:35.225427] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:27.422 [2024-07-13 07:11:35.225465] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.422 [2024-07-13 07:11:35.230663] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2019210 was disconnected and freed. delete nvme_qpair. 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.422 07:11:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:27.422 07:11:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:28.368 07:11:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:29.744 07:11:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:30.677 07:11:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:31.617 07:11:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:31.617 07:11:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.617 07:11:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.617 07:11:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:31.617 07:11:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:31.617 07:11:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:31.617 07:11:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:31.617 07:11:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.617 07:11:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:31.617 07:11:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:32.553 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.553 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.553 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.553 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.553 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.553 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.553 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.553 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.812 [2024-07-13 07:11:40.653180] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:32.812 [2024-07-13 07:11:40.653281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.812 [2024-07-13 07:11:40.653297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.812 [2024-07-13 07:11:40.653310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.812 [2024-07-13 07:11:40.653320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.812 [2024-07-13 07:11:40.653330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.812 [2024-07-13 07:11:40.653337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.812 [2024-07-13 07:11:40.653346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.812 [2024-07-13 07:11:40.653354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.812 [2024-07-13 07:11:40.653364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.812 [2024-07-13 07:11:40.653373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.812 [2024-07-13 07:11:40.653381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdff90 is same with the state(5) to be set 00:25:32.812 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:32.812 07:11:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:32.812 [2024-07-13 07:11:40.663173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdff90 (9): Bad file descriptor 00:25:32.812 [2024-07-13 07:11:40.673195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.746 [2024-07-13 07:11:41.728687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:33.746 [2024-07-13 07:11:41.728857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fdff90 with addr=10.0.0.2, port=4420 00:25:33.746 [2024-07-13 07:11:41.728893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdff90 is same with the state(5) to be set 00:25:33.746 [2024-07-13 07:11:41.729002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdff90 (9): Bad file descriptor 00:25:33.746 [2024-07-13 07:11:41.729963] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:33.746 [2024-07-13 07:11:41.730051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:33.746 [2024-07-13 07:11:41.730078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:33.746 [2024-07-13 07:11:41.730100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:33.746 [2024-07-13 07:11:41.730166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
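The repeated one-second polls in this stretch are the test's wait_for_bdev helper: after the 10.0.0.2 address is deleted from nvmf_tgt_if, the host's reconnect attempts fail (connect() errno 110) until the controller is given up, and the loop keeps listing bdevs over /tmp/host.sock until nvme0n1 disappears. A rough equivalent of that polling loop, again assuming scripts/rpc.py behind the rpc_cmd helper; the jq/sort/xargs pipeline is the same one the trace shows:

# loop until the host no longer reports any bdev (nvme0n1 vanishes once the controller is dropped)
while true; do
    bdevs=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ "$bdevs" == "" ]] && break
    sleep 1
done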
00:25:33.746 [2024-07-13 07:11:41.730191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:33.746 07:11:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:34.682 [2024-07-13 07:11:42.730277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:34.682 [2024-07-13 07:11:42.730396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:34.682 [2024-07-13 07:11:42.730425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:34.682 [2024-07-13 07:11:42.730437] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:34.682 [2024-07-13 07:11:42.730466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.682 [2024-07-13 07:11:42.730502] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:34.682 [2024-07-13 07:11:42.730594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.682 [2024-07-13 07:11:42.730611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.682 [2024-07-13 07:11:42.730627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.682 [2024-07-13 07:11:42.730636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.682 [2024-07-13 07:11:42.730645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.682 [2024-07-13 07:11:42.730654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.682 [2024-07-13 07:11:42.730663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.682 [2024-07-13 07:11:42.730672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.682 [2024-07-13 07:11:42.730683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.682 [2024-07-13 07:11:42.730691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.682 [2024-07-13 07:11:42.730708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:34.682 [2024-07-13 07:11:42.730935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdf410 (9): Bad file descriptor 00:25:34.682 [2024-07-13 07:11:42.731945] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:34.682 [2024-07-13 07:11:42.731968] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:34.941 07:11:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:35.875 07:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:35.875 07:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.875 07:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.875 07:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:35.875 07:11:43 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:35.875 07:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:35.875 07:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:35.875 07:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.134 07:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:36.134 07:11:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:36.702 [2024-07-13 07:11:44.743691] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:36.702 [2024-07-13 07:11:44.743748] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:36.702 [2024-07-13 07:11:44.743766] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:36.961 [2024-07-13 07:11:44.829866] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:36.961 [2024-07-13 07:11:44.886291] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:36.961 [2024-07-13 07:11:44.886372] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:36.961 [2024-07-13 07:11:44.886398] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:36.961 [2024-07-13 07:11:44.886440] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:36.961 [2024-07-13 07:11:44.886460] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:36.961 [2024-07-13 07:11:44.892180] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1fcf6d0 was disconnected and freed. delete nvme_qpair. 
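The restore step that produced the re-attach above is the mirror image of the earlier removal: the 10.0.0.2/24 address is put back on nvmf_tgt_if inside the target namespace, the link is brought up, and the still-running discovery service re-attaches the subsystem as nvme1 (bdev nvme1n1). A sketch of that restore, using the two ip commands from the trace plus the same wait loop as above with the new bdev name:

# re-add the target address inside its namespace and bring the link back up
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# then poll bdev_get_bdevs (as in the loop above) until it reports nvme1n1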
00:25:36.961 07:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:36.961 07:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.961 07:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.961 07:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:36.961 07:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:36.961 07:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:36.961 07:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:36.961 07:11:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.961 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:36.961 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:36.961 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 109483 00:25:36.961 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 109483 ']' 00:25:36.961 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 109483 00:25:36.961 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:36.961 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:36.961 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109483 00:25:37.219 killing process with pid 109483 00:25:37.219 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:37.219 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:37.219 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109483' 00:25:37.219 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 109483 00:25:37.219 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 109483 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:37.478 rmmod nvme_tcp 00:25:37.478 rmmod nvme_fabrics 00:25:37.478 rmmod nvme_keyring 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:37.478 
07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 109433 ']' 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 109433 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 109433 ']' 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 109433 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 109433 00:25:37.478 killing process with pid 109433 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 109433' 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 109433 00:25:37.478 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 109433 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:37.737 00:25:37.737 real 0m14.379s 00:25:37.737 user 0m25.635s 00:25:37.737 sys 0m1.742s 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:37.737 07:11:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.737 ************************************ 00:25:37.737 END TEST nvmf_discovery_remove_ifc 00:25:37.737 ************************************ 00:25:37.737 07:11:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:37.737 07:11:45 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:37.737 07:11:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:37.737 07:11:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.737 07:11:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.737 ************************************ 00:25:37.737 START TEST nvmf_identify_kernel_target 00:25:37.737 ************************************ 00:25:37.737 07:11:45 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:37.996 * Looking for test storage... 00:25:37.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.996 07:11:45 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.996 07:11:45 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:37.996 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:37.997 Cannot find device "nvmf_tgt_br" 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:37.997 Cannot find device "nvmf_tgt_br2" 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:37.997 Cannot find device "nvmf_tgt_br" 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:37.997 
Cannot find device "nvmf_tgt_br2" 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:37.997 07:11:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:37.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:37.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:37.997 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:38.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:25:38.255 00:25:38.255 --- 10.0.0.2 ping statistics --- 00:25:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.255 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:38.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:38.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:25:38.255 00:25:38.255 --- 10.0.0.3 ping statistics --- 00:25:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.255 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:38.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:38.255 00:25:38.255 --- 10.0.0.1 ping statistics --- 00:25:38.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.255 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:38.255 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:38.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:38.822 Waiting for block devices as requested 00:25:38.822 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:38.822 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:38.822 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:38.822 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:38.822 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:38.822 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:38.822 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:38.822 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:38.822 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:38.822 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:38.822 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:39.082 No valid GPT data, bailing 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:39.082 No valid GPT data, bailing 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:39.082 07:11:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:39.082 No valid GPT data, bailing 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:39.082 No valid GPT data, bailing 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
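[editor's note] The trace entries that follow (the remaining mkdir calls, the bare echo commands, and the ln -s from nvmf/common.sh) drive the kernel nvmet configfs interface to export the free NVMe disk selected above (/dev/nvme1n1) as an NVMe/TCP subsystem listening on 10.0.0.1:4420. Because bash xtrace does not print the redirection targets of those echo commands, the sketch below is an assumption based on the standard nvmet configfs layout, not a literal reproduction of nvmf/common.sh; paths and values mirror what the trace shows.

# Hedged sketch: exporting a block device through the kernel NVMe/TCP target via configfs.
# The attribute file names are the conventional nvmet layout and are assumed here,
# since the trace above only shows the echoed values, not where they are written.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"      # model string reported to the host
echo 1 > "$subsys/attr_allow_any_host"                              # accept connections from any host NQN
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"              # back namespace 1 with the unused NVMe disk
echo 1 > "$subsys/namespaces/1/enable"                              # enable the namespace
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"                        # listen on the initiator-side veth address
echo tcp > "$nvmet/ports/1/addr_trtype"                             # NVMe/TCP transport
echo 4420 > "$nvmet/ports/1/addr_trsvcid"                           # standard NVMe-oF service port
echo ipv4 > "$nvmet/ports/1/addr_adrfam"                            # address family
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                        # publish the subsystem on the port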
00:25:39.082 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -a 10.0.0.1 -t tcp -s 4420 00:25:39.342 00:25:39.342 Discovery Log Number of Records 2, Generation counter 2 00:25:39.342 =====Discovery Log Entry 0====== 00:25:39.342 trtype: tcp 00:25:39.342 adrfam: ipv4 00:25:39.342 subtype: current discovery subsystem 00:25:39.342 treq: not specified, sq flow control disable supported 00:25:39.342 portid: 1 00:25:39.342 trsvcid: 4420 00:25:39.342 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:39.342 traddr: 10.0.0.1 00:25:39.342 eflags: none 00:25:39.342 sectype: none 00:25:39.342 =====Discovery Log Entry 1====== 00:25:39.342 trtype: tcp 00:25:39.342 adrfam: ipv4 00:25:39.342 subtype: nvme subsystem 00:25:39.342 treq: not specified, sq flow control disable supported 00:25:39.342 portid: 1 00:25:39.342 trsvcid: 4420 00:25:39.342 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:39.342 traddr: 10.0.0.1 00:25:39.342 eflags: none 00:25:39.342 sectype: none 00:25:39.342 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:39.342 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:39.342 ===================================================== 00:25:39.342 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:39.342 ===================================================== 00:25:39.342 Controller Capabilities/Features 00:25:39.342 ================================ 00:25:39.342 Vendor ID: 0000 00:25:39.342 Subsystem Vendor ID: 0000 00:25:39.342 Serial Number: f5fbe286ab557b39c0de 00:25:39.342 Model Number: Linux 00:25:39.342 Firmware Version: 6.7.0-68 00:25:39.342 Recommended Arb Burst: 0 00:25:39.342 IEEE OUI Identifier: 00 00 00 00:25:39.342 Multi-path I/O 00:25:39.342 May have multiple subsystem ports: No 00:25:39.342 May have multiple controllers: No 00:25:39.342 Associated with SR-IOV VF: No 00:25:39.342 Max Data Transfer Size: Unlimited 00:25:39.342 Max Number of Namespaces: 0 
00:25:39.342 Max Number of I/O Queues: 1024 00:25:39.342 NVMe Specification Version (VS): 1.3 00:25:39.342 NVMe Specification Version (Identify): 1.3 00:25:39.342 Maximum Queue Entries: 1024 00:25:39.342 Contiguous Queues Required: No 00:25:39.342 Arbitration Mechanisms Supported 00:25:39.342 Weighted Round Robin: Not Supported 00:25:39.342 Vendor Specific: Not Supported 00:25:39.342 Reset Timeout: 7500 ms 00:25:39.342 Doorbell Stride: 4 bytes 00:25:39.342 NVM Subsystem Reset: Not Supported 00:25:39.342 Command Sets Supported 00:25:39.342 NVM Command Set: Supported 00:25:39.342 Boot Partition: Not Supported 00:25:39.342 Memory Page Size Minimum: 4096 bytes 00:25:39.342 Memory Page Size Maximum: 4096 bytes 00:25:39.342 Persistent Memory Region: Not Supported 00:25:39.342 Optional Asynchronous Events Supported 00:25:39.342 Namespace Attribute Notices: Not Supported 00:25:39.342 Firmware Activation Notices: Not Supported 00:25:39.342 ANA Change Notices: Not Supported 00:25:39.342 PLE Aggregate Log Change Notices: Not Supported 00:25:39.342 LBA Status Info Alert Notices: Not Supported 00:25:39.342 EGE Aggregate Log Change Notices: Not Supported 00:25:39.342 Normal NVM Subsystem Shutdown event: Not Supported 00:25:39.342 Zone Descriptor Change Notices: Not Supported 00:25:39.342 Discovery Log Change Notices: Supported 00:25:39.342 Controller Attributes 00:25:39.342 128-bit Host Identifier: Not Supported 00:25:39.342 Non-Operational Permissive Mode: Not Supported 00:25:39.342 NVM Sets: Not Supported 00:25:39.342 Read Recovery Levels: Not Supported 00:25:39.342 Endurance Groups: Not Supported 00:25:39.342 Predictable Latency Mode: Not Supported 00:25:39.342 Traffic Based Keep ALive: Not Supported 00:25:39.342 Namespace Granularity: Not Supported 00:25:39.342 SQ Associations: Not Supported 00:25:39.342 UUID List: Not Supported 00:25:39.342 Multi-Domain Subsystem: Not Supported 00:25:39.342 Fixed Capacity Management: Not Supported 00:25:39.342 Variable Capacity Management: Not Supported 00:25:39.343 Delete Endurance Group: Not Supported 00:25:39.343 Delete NVM Set: Not Supported 00:25:39.343 Extended LBA Formats Supported: Not Supported 00:25:39.343 Flexible Data Placement Supported: Not Supported 00:25:39.343 00:25:39.343 Controller Memory Buffer Support 00:25:39.343 ================================ 00:25:39.343 Supported: No 00:25:39.343 00:25:39.343 Persistent Memory Region Support 00:25:39.343 ================================ 00:25:39.343 Supported: No 00:25:39.343 00:25:39.343 Admin Command Set Attributes 00:25:39.343 ============================ 00:25:39.343 Security Send/Receive: Not Supported 00:25:39.343 Format NVM: Not Supported 00:25:39.343 Firmware Activate/Download: Not Supported 00:25:39.343 Namespace Management: Not Supported 00:25:39.343 Device Self-Test: Not Supported 00:25:39.343 Directives: Not Supported 00:25:39.343 NVMe-MI: Not Supported 00:25:39.343 Virtualization Management: Not Supported 00:25:39.343 Doorbell Buffer Config: Not Supported 00:25:39.343 Get LBA Status Capability: Not Supported 00:25:39.343 Command & Feature Lockdown Capability: Not Supported 00:25:39.343 Abort Command Limit: 1 00:25:39.343 Async Event Request Limit: 1 00:25:39.343 Number of Firmware Slots: N/A 00:25:39.343 Firmware Slot 1 Read-Only: N/A 00:25:39.343 Firmware Activation Without Reset: N/A 00:25:39.343 Multiple Update Detection Support: N/A 00:25:39.343 Firmware Update Granularity: No Information Provided 00:25:39.343 Per-Namespace SMART Log: No 00:25:39.343 Asymmetric Namespace Access Log Page: 
Not Supported 00:25:39.343 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:39.343 Command Effects Log Page: Not Supported 00:25:39.343 Get Log Page Extended Data: Supported 00:25:39.343 Telemetry Log Pages: Not Supported 00:25:39.343 Persistent Event Log Pages: Not Supported 00:25:39.343 Supported Log Pages Log Page: May Support 00:25:39.343 Commands Supported & Effects Log Page: Not Supported 00:25:39.343 Feature Identifiers & Effects Log Page:May Support 00:25:39.343 NVMe-MI Commands & Effects Log Page: May Support 00:25:39.343 Data Area 4 for Telemetry Log: Not Supported 00:25:39.343 Error Log Page Entries Supported: 1 00:25:39.343 Keep Alive: Not Supported 00:25:39.343 00:25:39.343 NVM Command Set Attributes 00:25:39.343 ========================== 00:25:39.343 Submission Queue Entry Size 00:25:39.343 Max: 1 00:25:39.343 Min: 1 00:25:39.343 Completion Queue Entry Size 00:25:39.343 Max: 1 00:25:39.343 Min: 1 00:25:39.343 Number of Namespaces: 0 00:25:39.343 Compare Command: Not Supported 00:25:39.343 Write Uncorrectable Command: Not Supported 00:25:39.343 Dataset Management Command: Not Supported 00:25:39.343 Write Zeroes Command: Not Supported 00:25:39.343 Set Features Save Field: Not Supported 00:25:39.343 Reservations: Not Supported 00:25:39.343 Timestamp: Not Supported 00:25:39.343 Copy: Not Supported 00:25:39.343 Volatile Write Cache: Not Present 00:25:39.343 Atomic Write Unit (Normal): 1 00:25:39.343 Atomic Write Unit (PFail): 1 00:25:39.343 Atomic Compare & Write Unit: 1 00:25:39.343 Fused Compare & Write: Not Supported 00:25:39.343 Scatter-Gather List 00:25:39.343 SGL Command Set: Supported 00:25:39.343 SGL Keyed: Not Supported 00:25:39.343 SGL Bit Bucket Descriptor: Not Supported 00:25:39.343 SGL Metadata Pointer: Not Supported 00:25:39.343 Oversized SGL: Not Supported 00:25:39.343 SGL Metadata Address: Not Supported 00:25:39.343 SGL Offset: Supported 00:25:39.343 Transport SGL Data Block: Not Supported 00:25:39.343 Replay Protected Memory Block: Not Supported 00:25:39.343 00:25:39.343 Firmware Slot Information 00:25:39.343 ========================= 00:25:39.343 Active slot: 0 00:25:39.343 00:25:39.343 00:25:39.343 Error Log 00:25:39.343 ========= 00:25:39.343 00:25:39.343 Active Namespaces 00:25:39.343 ================= 00:25:39.343 Discovery Log Page 00:25:39.343 ================== 00:25:39.343 Generation Counter: 2 00:25:39.343 Number of Records: 2 00:25:39.343 Record Format: 0 00:25:39.343 00:25:39.343 Discovery Log Entry 0 00:25:39.343 ---------------------- 00:25:39.343 Transport Type: 3 (TCP) 00:25:39.343 Address Family: 1 (IPv4) 00:25:39.343 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:39.343 Entry Flags: 00:25:39.343 Duplicate Returned Information: 0 00:25:39.343 Explicit Persistent Connection Support for Discovery: 0 00:25:39.343 Transport Requirements: 00:25:39.343 Secure Channel: Not Specified 00:25:39.343 Port ID: 1 (0x0001) 00:25:39.343 Controller ID: 65535 (0xffff) 00:25:39.343 Admin Max SQ Size: 32 00:25:39.343 Transport Service Identifier: 4420 00:25:39.343 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:39.343 Transport Address: 10.0.0.1 00:25:39.343 Discovery Log Entry 1 00:25:39.343 ---------------------- 00:25:39.343 Transport Type: 3 (TCP) 00:25:39.343 Address Family: 1 (IPv4) 00:25:39.343 Subsystem Type: 2 (NVM Subsystem) 00:25:39.343 Entry Flags: 00:25:39.343 Duplicate Returned Information: 0 00:25:39.343 Explicit Persistent Connection Support for Discovery: 0 00:25:39.343 Transport Requirements: 00:25:39.343 
Secure Channel: Not Specified 00:25:39.343 Port ID: 1 (0x0001) 00:25:39.343 Controller ID: 65535 (0xffff) 00:25:39.343 Admin Max SQ Size: 32 00:25:39.343 Transport Service Identifier: 4420 00:25:39.343 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:39.343 Transport Address: 10.0.0.1 00:25:39.343 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:39.603 get_feature(0x01) failed 00:25:39.603 get_feature(0x02) failed 00:25:39.603 get_feature(0x04) failed 00:25:39.603 ===================================================== 00:25:39.603 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:39.603 ===================================================== 00:25:39.603 Controller Capabilities/Features 00:25:39.603 ================================ 00:25:39.603 Vendor ID: 0000 00:25:39.603 Subsystem Vendor ID: 0000 00:25:39.603 Serial Number: 6a5643a76424ee34ad71 00:25:39.603 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:39.603 Firmware Version: 6.7.0-68 00:25:39.603 Recommended Arb Burst: 6 00:25:39.603 IEEE OUI Identifier: 00 00 00 00:25:39.603 Multi-path I/O 00:25:39.603 May have multiple subsystem ports: Yes 00:25:39.603 May have multiple controllers: Yes 00:25:39.603 Associated with SR-IOV VF: No 00:25:39.603 Max Data Transfer Size: Unlimited 00:25:39.603 Max Number of Namespaces: 1024 00:25:39.603 Max Number of I/O Queues: 128 00:25:39.603 NVMe Specification Version (VS): 1.3 00:25:39.603 NVMe Specification Version (Identify): 1.3 00:25:39.603 Maximum Queue Entries: 1024 00:25:39.603 Contiguous Queues Required: No 00:25:39.603 Arbitration Mechanisms Supported 00:25:39.603 Weighted Round Robin: Not Supported 00:25:39.603 Vendor Specific: Not Supported 00:25:39.603 Reset Timeout: 7500 ms 00:25:39.603 Doorbell Stride: 4 bytes 00:25:39.603 NVM Subsystem Reset: Not Supported 00:25:39.603 Command Sets Supported 00:25:39.603 NVM Command Set: Supported 00:25:39.603 Boot Partition: Not Supported 00:25:39.603 Memory Page Size Minimum: 4096 bytes 00:25:39.603 Memory Page Size Maximum: 4096 bytes 00:25:39.603 Persistent Memory Region: Not Supported 00:25:39.603 Optional Asynchronous Events Supported 00:25:39.603 Namespace Attribute Notices: Supported 00:25:39.603 Firmware Activation Notices: Not Supported 00:25:39.603 ANA Change Notices: Supported 00:25:39.603 PLE Aggregate Log Change Notices: Not Supported 00:25:39.603 LBA Status Info Alert Notices: Not Supported 00:25:39.603 EGE Aggregate Log Change Notices: Not Supported 00:25:39.603 Normal NVM Subsystem Shutdown event: Not Supported 00:25:39.603 Zone Descriptor Change Notices: Not Supported 00:25:39.603 Discovery Log Change Notices: Not Supported 00:25:39.603 Controller Attributes 00:25:39.603 128-bit Host Identifier: Supported 00:25:39.603 Non-Operational Permissive Mode: Not Supported 00:25:39.603 NVM Sets: Not Supported 00:25:39.603 Read Recovery Levels: Not Supported 00:25:39.603 Endurance Groups: Not Supported 00:25:39.603 Predictable Latency Mode: Not Supported 00:25:39.603 Traffic Based Keep ALive: Supported 00:25:39.603 Namespace Granularity: Not Supported 00:25:39.603 SQ Associations: Not Supported 00:25:39.603 UUID List: Not Supported 00:25:39.603 Multi-Domain Subsystem: Not Supported 00:25:39.603 Fixed Capacity Management: Not Supported 00:25:39.603 Variable Capacity Management: Not Supported 00:25:39.603 
Delete Endurance Group: Not Supported 00:25:39.603 Delete NVM Set: Not Supported 00:25:39.603 Extended LBA Formats Supported: Not Supported 00:25:39.603 Flexible Data Placement Supported: Not Supported 00:25:39.603 00:25:39.603 Controller Memory Buffer Support 00:25:39.603 ================================ 00:25:39.603 Supported: No 00:25:39.603 00:25:39.603 Persistent Memory Region Support 00:25:39.603 ================================ 00:25:39.603 Supported: No 00:25:39.603 00:25:39.603 Admin Command Set Attributes 00:25:39.603 ============================ 00:25:39.603 Security Send/Receive: Not Supported 00:25:39.604 Format NVM: Not Supported 00:25:39.604 Firmware Activate/Download: Not Supported 00:25:39.604 Namespace Management: Not Supported 00:25:39.604 Device Self-Test: Not Supported 00:25:39.604 Directives: Not Supported 00:25:39.604 NVMe-MI: Not Supported 00:25:39.604 Virtualization Management: Not Supported 00:25:39.604 Doorbell Buffer Config: Not Supported 00:25:39.604 Get LBA Status Capability: Not Supported 00:25:39.604 Command & Feature Lockdown Capability: Not Supported 00:25:39.604 Abort Command Limit: 4 00:25:39.604 Async Event Request Limit: 4 00:25:39.604 Number of Firmware Slots: N/A 00:25:39.604 Firmware Slot 1 Read-Only: N/A 00:25:39.604 Firmware Activation Without Reset: N/A 00:25:39.604 Multiple Update Detection Support: N/A 00:25:39.604 Firmware Update Granularity: No Information Provided 00:25:39.604 Per-Namespace SMART Log: Yes 00:25:39.604 Asymmetric Namespace Access Log Page: Supported 00:25:39.604 ANA Transition Time : 10 sec 00:25:39.604 00:25:39.604 Asymmetric Namespace Access Capabilities 00:25:39.604 ANA Optimized State : Supported 00:25:39.604 ANA Non-Optimized State : Supported 00:25:39.604 ANA Inaccessible State : Supported 00:25:39.604 ANA Persistent Loss State : Supported 00:25:39.604 ANA Change State : Supported 00:25:39.604 ANAGRPID is not changed : No 00:25:39.604 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:39.604 00:25:39.604 ANA Group Identifier Maximum : 128 00:25:39.604 Number of ANA Group Identifiers : 128 00:25:39.604 Max Number of Allowed Namespaces : 1024 00:25:39.604 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:39.604 Command Effects Log Page: Supported 00:25:39.604 Get Log Page Extended Data: Supported 00:25:39.604 Telemetry Log Pages: Not Supported 00:25:39.604 Persistent Event Log Pages: Not Supported 00:25:39.604 Supported Log Pages Log Page: May Support 00:25:39.604 Commands Supported & Effects Log Page: Not Supported 00:25:39.604 Feature Identifiers & Effects Log Page:May Support 00:25:39.604 NVMe-MI Commands & Effects Log Page: May Support 00:25:39.604 Data Area 4 for Telemetry Log: Not Supported 00:25:39.604 Error Log Page Entries Supported: 128 00:25:39.604 Keep Alive: Supported 00:25:39.604 Keep Alive Granularity: 1000 ms 00:25:39.604 00:25:39.604 NVM Command Set Attributes 00:25:39.604 ========================== 00:25:39.604 Submission Queue Entry Size 00:25:39.604 Max: 64 00:25:39.604 Min: 64 00:25:39.604 Completion Queue Entry Size 00:25:39.604 Max: 16 00:25:39.604 Min: 16 00:25:39.604 Number of Namespaces: 1024 00:25:39.604 Compare Command: Not Supported 00:25:39.604 Write Uncorrectable Command: Not Supported 00:25:39.604 Dataset Management Command: Supported 00:25:39.604 Write Zeroes Command: Supported 00:25:39.604 Set Features Save Field: Not Supported 00:25:39.604 Reservations: Not Supported 00:25:39.604 Timestamp: Not Supported 00:25:39.604 Copy: Not Supported 00:25:39.604 Volatile Write Cache: Present 
00:25:39.604 Atomic Write Unit (Normal): 1 00:25:39.604 Atomic Write Unit (PFail): 1 00:25:39.604 Atomic Compare & Write Unit: 1 00:25:39.604 Fused Compare & Write: Not Supported 00:25:39.604 Scatter-Gather List 00:25:39.604 SGL Command Set: Supported 00:25:39.604 SGL Keyed: Not Supported 00:25:39.604 SGL Bit Bucket Descriptor: Not Supported 00:25:39.604 SGL Metadata Pointer: Not Supported 00:25:39.604 Oversized SGL: Not Supported 00:25:39.604 SGL Metadata Address: Not Supported 00:25:39.604 SGL Offset: Supported 00:25:39.604 Transport SGL Data Block: Not Supported 00:25:39.604 Replay Protected Memory Block: Not Supported 00:25:39.604 00:25:39.604 Firmware Slot Information 00:25:39.604 ========================= 00:25:39.604 Active slot: 0 00:25:39.604 00:25:39.604 Asymmetric Namespace Access 00:25:39.604 =========================== 00:25:39.604 Change Count : 0 00:25:39.604 Number of ANA Group Descriptors : 1 00:25:39.604 ANA Group Descriptor : 0 00:25:39.604 ANA Group ID : 1 00:25:39.604 Number of NSID Values : 1 00:25:39.604 Change Count : 0 00:25:39.604 ANA State : 1 00:25:39.604 Namespace Identifier : 1 00:25:39.604 00:25:39.604 Commands Supported and Effects 00:25:39.604 ============================== 00:25:39.604 Admin Commands 00:25:39.604 -------------- 00:25:39.604 Get Log Page (02h): Supported 00:25:39.604 Identify (06h): Supported 00:25:39.604 Abort (08h): Supported 00:25:39.604 Set Features (09h): Supported 00:25:39.604 Get Features (0Ah): Supported 00:25:39.604 Asynchronous Event Request (0Ch): Supported 00:25:39.604 Keep Alive (18h): Supported 00:25:39.604 I/O Commands 00:25:39.604 ------------ 00:25:39.604 Flush (00h): Supported 00:25:39.604 Write (01h): Supported LBA-Change 00:25:39.604 Read (02h): Supported 00:25:39.604 Write Zeroes (08h): Supported LBA-Change 00:25:39.604 Dataset Management (09h): Supported 00:25:39.604 00:25:39.604 Error Log 00:25:39.604 ========= 00:25:39.604 Entry: 0 00:25:39.604 Error Count: 0x3 00:25:39.604 Submission Queue Id: 0x0 00:25:39.604 Command Id: 0x5 00:25:39.604 Phase Bit: 0 00:25:39.604 Status Code: 0x2 00:25:39.604 Status Code Type: 0x0 00:25:39.604 Do Not Retry: 1 00:25:39.604 Error Location: 0x28 00:25:39.604 LBA: 0x0 00:25:39.604 Namespace: 0x0 00:25:39.604 Vendor Log Page: 0x0 00:25:39.604 ----------- 00:25:39.604 Entry: 1 00:25:39.604 Error Count: 0x2 00:25:39.604 Submission Queue Id: 0x0 00:25:39.604 Command Id: 0x5 00:25:39.604 Phase Bit: 0 00:25:39.604 Status Code: 0x2 00:25:39.604 Status Code Type: 0x0 00:25:39.604 Do Not Retry: 1 00:25:39.604 Error Location: 0x28 00:25:39.604 LBA: 0x0 00:25:39.604 Namespace: 0x0 00:25:39.604 Vendor Log Page: 0x0 00:25:39.604 ----------- 00:25:39.604 Entry: 2 00:25:39.604 Error Count: 0x1 00:25:39.604 Submission Queue Id: 0x0 00:25:39.604 Command Id: 0x4 00:25:39.604 Phase Bit: 0 00:25:39.604 Status Code: 0x2 00:25:39.604 Status Code Type: 0x0 00:25:39.604 Do Not Retry: 1 00:25:39.604 Error Location: 0x28 00:25:39.604 LBA: 0x0 00:25:39.604 Namespace: 0x0 00:25:39.604 Vendor Log Page: 0x0 00:25:39.604 00:25:39.604 Number of Queues 00:25:39.604 ================ 00:25:39.604 Number of I/O Submission Queues: 128 00:25:39.604 Number of I/O Completion Queues: 128 00:25:39.604 00:25:39.604 ZNS Specific Controller Data 00:25:39.604 ============================ 00:25:39.604 Zone Append Size Limit: 0 00:25:39.604 00:25:39.605 00:25:39.605 Active Namespaces 00:25:39.605 ================= 00:25:39.605 get_feature(0x05) failed 00:25:39.605 Namespace ID:1 00:25:39.605 Command Set Identifier: NVM (00h) 
00:25:39.605 Deallocate: Supported 00:25:39.605 Deallocated/Unwritten Error: Not Supported 00:25:39.605 Deallocated Read Value: Unknown 00:25:39.605 Deallocate in Write Zeroes: Not Supported 00:25:39.605 Deallocated Guard Field: 0xFFFF 00:25:39.605 Flush: Supported 00:25:39.605 Reservation: Not Supported 00:25:39.605 Namespace Sharing Capabilities: Multiple Controllers 00:25:39.605 Size (in LBAs): 1310720 (5GiB) 00:25:39.605 Capacity (in LBAs): 1310720 (5GiB) 00:25:39.605 Utilization (in LBAs): 1310720 (5GiB) 00:25:39.605 UUID: 7ebed791-5ffd-4f5d-8b53-364a28c8a069 00:25:39.605 Thin Provisioning: Not Supported 00:25:39.605 Per-NS Atomic Units: Yes 00:25:39.605 Atomic Boundary Size (Normal): 0 00:25:39.605 Atomic Boundary Size (PFail): 0 00:25:39.605 Atomic Boundary Offset: 0 00:25:39.605 NGUID/EUI64 Never Reused: No 00:25:39.605 ANA group ID: 1 00:25:39.605 Namespace Write Protected: No 00:25:39.605 Number of LBA Formats: 1 00:25:39.605 Current LBA Format: LBA Format #00 00:25:39.605 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:39.605 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:39.605 rmmod nvme_tcp 00:25:39.605 rmmod nvme_fabrics 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.605 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:39.864 
07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:39.864 07:11:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:40.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:40.689 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.689 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.689 ************************************ 00:25:40.689 END TEST nvmf_identify_kernel_target 00:25:40.689 ************************************ 00:25:40.689 00:25:40.689 real 0m2.897s 00:25:40.689 user 0m0.998s 00:25:40.689 sys 0m1.383s 00:25:40.689 07:11:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:40.689 07:11:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.689 07:11:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:40.689 07:11:48 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:40.689 07:11:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:40.689 07:11:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.689 07:11:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.689 ************************************ 00:25:40.689 START TEST nvmf_auth_host 00:25:40.689 ************************************ 00:25:40.689 07:11:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:40.948 * Looking for test storage... 
00:25:40.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.948 07:11:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:40.949 Cannot find device "nvmf_tgt_br" 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:40.949 Cannot find device "nvmf_tgt_br2" 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:40.949 Cannot find device "nvmf_tgt_br" 
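[editor's note] The nvmf_auth_host run now repeats the same nvmftestinit / nvmf_veth_init sequence seen before the identify_kernel_target test: the teardown steps fail harmlessly ("Cannot find device ...") because nothing is left from the previous run, and the namespace/veth/bridge topology is then rebuilt in the entries that follow. For reference, a condensed, standalone sketch of that topology is given below; it mirrors the interface and namespace names from the trace, assumes a host with no leftover nvmf_* links, and omits the second target veth pair (nvmf_tgt_if2 / 10.0.0.3) for brevity.

# Condensed sketch of the test network built by nvmf_veth_init:
# a host-side initiator veth and a namespaced target veth, joined by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                             # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                              # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP toward the initiator veth
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let bridged traffic pass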
00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:40.949 Cannot find device "nvmf_tgt_br2" 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:40.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:40.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:40.949 07:11:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:40.949 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:41.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:25:41.207 00:25:41.207 --- 10.0.0.2 ping statistics --- 00:25:41.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.207 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:41.207 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:41.207 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:25:41.207 00:25:41.207 --- 10.0.0.3 ping statistics --- 00:25:41.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.207 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:41.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:25:41.207 00:25:41.207 --- 10.0.0.1 ping statistics --- 00:25:41.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.207 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=110379 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 110379 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 110379 ']' 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.207 07:11:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.207 07:11:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5b2049ab57e4f3b6f19a1b442492972d 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.p62 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5b2049ab57e4f3b6f19a1b442492972d 0 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5b2049ab57e4f3b6f19a1b442492972d 0 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5b2049ab57e4f3b6f19a1b442492972d 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.p62 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.p62 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.p62 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e2a5cd3303f714500b3a6114f560b16896a6db2426127a9c4e69f50bf7f1fdc0 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dje 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e2a5cd3303f714500b3a6114f560b16896a6db2426127a9c4e69f50bf7f1fdc0 3 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e2a5cd3303f714500b3a6114f560b16896a6db2426127a9c4e69f50bf7f1fdc0 3 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e2a5cd3303f714500b3a6114f560b16896a6db2426127a9c4e69f50bf7f1fdc0 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dje 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dje 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dje 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7c25f24ade76b9470814042e9d93640811f3dfeec2a01f8c 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.w6v 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7c25f24ade76b9470814042e9d93640811f3dfeec2a01f8c 0 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7c25f24ade76b9470814042e9d93640811f3dfeec2a01f8c 0 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7c25f24ade76b9470814042e9d93640811f3dfeec2a01f8c 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.w6v 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.w6v 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.w6v 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=11c61d6599ffc9536c072f308e2693c3bee5e0ef1ce17dee 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xGm 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 11c61d6599ffc9536c072f308e2693c3bee5e0ef1ce17dee 2 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 11c61d6599ffc9536c072f308e2693c3bee5e0ef1ce17dee 2 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=11c61d6599ffc9536c072f308e2693c3bee5e0ef1ce17dee 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xGm 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xGm 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.xGm 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=94498e64857dcbb35b21ca6ea7d9bb5d 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.PmA 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 94498e64857dcbb35b21ca6ea7d9bb5d 
1 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 94498e64857dcbb35b21ca6ea7d9bb5d 1 00:25:42.639 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.640 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.640 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=94498e64857dcbb35b21ca6ea7d9bb5d 00:25:42.640 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:42.640 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.PmA 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.PmA 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.PmA 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=05da52f2fcac189af13d98095620ee01 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.E4r 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 05da52f2fcac189af13d98095620ee01 1 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 05da52f2fcac189af13d98095620ee01 1 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=05da52f2fcac189af13d98095620ee01 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.E4r 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.E4r 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.E4r 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:42.912 07:11:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7410b12558cca3de6d59a0685ccd2db939b79722d49a9853 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2Qo 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7410b12558cca3de6d59a0685ccd2db939b79722d49a9853 2 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7410b12558cca3de6d59a0685ccd2db939b79722d49a9853 2 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7410b12558cca3de6d59a0685ccd2db939b79722d49a9853 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2Qo 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2Qo 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2Qo 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b3c717b979dfe90d23c8e9ef6f6caabe 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3f9 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b3c717b979dfe90d23c8e9ef6f6caabe 0 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b3c717b979dfe90d23c8e9ef6f6caabe 0 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b3c717b979dfe90d23c8e9ef6f6caabe 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3f9 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3f9 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3f9 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=056a9506eaafa25dd0137600b61dbe1a7852131d4fd460b44d3a892b136292ee 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2HO 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 056a9506eaafa25dd0137600b61dbe1a7852131d4fd460b44d3a892b136292ee 3 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 056a9506eaafa25dd0137600b61dbe1a7852131d4fd460b44d3a892b136292ee 3 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:42.912 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:42.913 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=056a9506eaafa25dd0137600b61dbe1a7852131d4fd460b44d3a892b136292ee 00:25:42.913 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:42.913 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2HO 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2HO 00:25:43.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2HO 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 110379 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 110379 ']' 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
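At this point all five key files are on disk: keys[0..4] plus matching controller keys ckeys[0..3] (ckeys[4] is deliberately left empty), and the script turns to waiting for the target's RPC socket. Each secret is random hex pulled from /dev/urandom with xxd and wrapped into the DHHC-1 text form that appears later in this log. A hedged sketch of that wrapping, assuming (this is our reading of the strings echoed further down, not something the log states) that the base64 payload is the secret bytes followed by their CRC-32 in little-endian order:

#!/usr/bin/env bash
# Hedged sketch of building one DHHC-1 secret string; digest id 01 maps to
# sha256 in the digests table above, and the CRC-32 suffix is an assumption.
secret=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex characters used as the secret text
python3 - "$secret" <<'PY'
import base64, sys, zlib
s = sys.argv[1].encode()
crc = zlib.crc32(s).to_bytes(4, "little")            # assumed CRC-32, little-endian
print("DHHC-1:01:" + base64.b64encode(s + crc).decode() + ":")
PY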
00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:43.177 07:11:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.p62 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dje ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dje 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.w6v 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.xGm ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xGm 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PmA 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.E4r ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.E4r 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
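The keyring_file_add_key calls above and below hand each generated file to the running nvmf_tgt: key<i> becomes the host secret for index i and ckey<i> the controller (bidirectional) secret, with key4 getting no controller counterpart. A hedged equivalent driven through rpc.py directly instead of the test's rpc_cmd wrapper (repo path and socket taken from the nvmf_tgt invocation above; file names are the ones generated earlier in this log, and the remaining pairs registered just below follow the same pattern):

# Hedged sketch; rpc.py talks to the same /var/tmp/spdk.sock the target listens on.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.p62
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dje
$RPC keyring_file_add_key key1  /tmp/spdk.key-null.w6v
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xGm
$RPC keyring_file_add_key key2  /tmp/spdk.key-sha256.PmA
$RPC keyring_file_add_key ckey2 /tmp/spdk.key-sha256.E4r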
00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2Qo 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3f9 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3f9 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2HO 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
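get_main_ns_ip resolves the initiator-side address (10.0.0.1 here), and configure_kernel_target now builds a Linux kernel nvmet target under the configfs paths defined above to act as the authenticating subsystem; the lines that follow load nvmet, pick an unused NVMe namespace as backing storage, and create that tree. A condensed, hedged sketch of the end state, using the stock nvmet configfs attribute names (xtrace does not show the redirection targets, so the exact attribute files are our assumption):

# Hedged sketch of the kernel target wiring (attribute names assumed, values from this log).
modprobe nvmet
sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"
echo 1            > "$sub/attr_allow_any_host"          # the auth test later restricts this via allowed_hosts
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"     # free namespace found by the scan below
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"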
00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:43.436 07:11:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:43.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:43.695 Waiting for block devices as requested 00:25:43.953 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:43.953 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:44.520 No valid GPT data, bailing 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:44.520 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:44.777 No valid GPT data, bailing 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:44.777 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:44.778 No valid GPT data, bailing 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:44.778 No valid GPT data, bailing 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:44.778 07:11:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -a 10.0.0.1 -t tcp -s 4420 00:25:44.778 00:25:44.778 Discovery Log Number of Records 2, Generation counter 2 00:25:44.778 =====Discovery Log Entry 0====== 00:25:44.778 trtype: tcp 00:25:44.778 adrfam: ipv4 00:25:44.778 subtype: current discovery subsystem 00:25:44.778 treq: not specified, sq flow control disable supported 00:25:44.778 portid: 1 00:25:44.778 trsvcid: 4420 00:25:44.778 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:44.778 traddr: 10.0.0.1 00:25:44.778 eflags: none 00:25:44.778 sectype: none 00:25:44.778 =====Discovery Log Entry 1====== 00:25:44.778 trtype: tcp 00:25:44.778 adrfam: ipv4 00:25:44.778 subtype: nvme subsystem 00:25:44.778 treq: not specified, sq flow control disable supported 00:25:44.778 portid: 1 00:25:44.778 trsvcid: 4420 00:25:44.778 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:44.778 traddr: 10.0.0.1 00:25:44.778 eflags: none 00:25:44.778 sectype: none 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:44.778 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.035 07:11:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.035 nvme0n1 00:25:45.035 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.035 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.035 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.035 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.035 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.035 07:11:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.294 nvme0n1 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.294 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.295 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.553 nvme0n1 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.553 07:11:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.553 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.554 07:11:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.554 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.554 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.554 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.554 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.554 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.554 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.554 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.813 nvme0n1 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:45.813 07:11:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.813 nvme0n1 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.813 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.072 nvme0n1 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.072 07:11:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.072 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.330 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.331 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.589 nvme0n1 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.589 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 nvme0n1 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 07:11:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 nvme0n1 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.848 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.849 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.107 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.108 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.108 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.108 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.108 07:11:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.108 07:11:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.108 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.108 07:11:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.108 nvme0n1 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
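The trace above repeats one pattern for every digest/DH-group/key-index combination: set the key on the target side (nvmet_auth_set_key), restrict the host to a single --dhchap-digests/--dhchap-dhgroups pair, attach a controller with the matching keyring entries, confirm that bdev_nvme_get_controllers reports nvme0, and detach before the next round. A minimal stand-alone sketch of one such round, reconstructed from the trace, follows; the NQNs, address 10.0.0.1, port 4420, jq filter and key names key<i>/ckey<i> are taken from the log, while the rpc.py path and the digest/dhgroup/keyid shell variables are illustrative assumptions, and the keyring entries are assumed to have been registered earlier in the test.

#!/usr/bin/env bash
# One authentication round trip as traced above, rewritten as a stand-alone
# sketch. Assumes a target is already listening on 10.0.0.1:4420 and that the
# DH-HMAC-CHAP secrets were registered earlier as keyring entries key<i>/ckey<i>.
rpc=./scripts/rpc.py       # assumed location of the SPDK RPC client
digest=sha256
dhgroup=ffdhe3072
keyid=2

# Limit the host to a single digest/DH-group pair for this round.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect with the host key and, when one exists, the controller key.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The round passes only if the controller actually came up as nvme0.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]] || exit 1

# Tear down before the next (digest, dhgroup, keyid) combination.
"$rpc" bdev_nvme_detach_controller nvme0

Every block in this section is the xtrace expansion of this same sequence with a different (digest, dhgroup, keyid) triple.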
00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.108 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.367 nvme0n1 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:47.367 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
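The nvmf/common.sh fragments that appear before each attach (local ip, the ip_candidates map, the chain of [[ -z ... ]] tests, and the final echo 10.0.0.1) are the expansion of the get_main_ns_ip helper named at host/auth.sh@61. A rough reconstruction, inferred only from these xtrace lines, is sketched here; the TEST_TRANSPORT variable name and the exact line grouping are assumptions, while the array keys and the indirect NVMF_INITIATOR_IP -> 10.0.0.1 lookup are visible in the trace.

get_main_ns_ip() {
    # Reconstructed from the xtrace output only; not the actual nvmf/common.sh source.
    # TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are assumed to be
    # exported by the test harness.
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        ["tcp"]=NVMF_INITIATOR_IP       # TCP runs dial the initiator-side IP
    )

    [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # trace: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                             # indirect lookup; trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                           # trace: echo 10.0.0.1
}

On this TCP run the helper therefore resolves to the initiator-side address 10.0.0.1, which is the -a argument of every bdev_nvme_attach_controller call in this section.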
00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.935 07:11:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.195 nvme0n1 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.195 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.455 nvme0n1 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.455 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.714 nvme0n1 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.714 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.973 nvme0n1 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.973 07:11:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.973 07:11:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.231 nvme0n1 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:49.231 07:11:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.608 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.867 nvme0n1 00:25:50.867 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.867 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.867 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.867 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.867 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.126 07:11:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.126 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.385 nvme0n1 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.385 
07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.385 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.952 nvme0n1 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.952 07:11:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.211 nvme0n1 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.211 07:12:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.211 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.470 nvme0n1 00:25:52.470 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.470 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.470 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.470 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.470 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.470 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.729 07:12:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.296 nvme0n1 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.296 07:12:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.296 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.297 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.865 nvme0n1 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.865 07:12:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.433 nvme0n1 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.433 
07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.433 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.434 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.434 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 nvme0n1 00:25:55.001 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.001 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.001 07:12:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.001 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.001 07:12:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.001 
07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.001 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.258 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.259 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.259 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.259 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.259 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.259 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.524 nvme0n1 00:25:55.524 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.796 nvme0n1 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.796 07:12:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.797 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.797 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.797 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.055 nvme0n1 00:25:56.056 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.056 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.056 07:12:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.056 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.056 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.056 07:12:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.056 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.056 nvme0n1 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.316 nvme0n1 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.316 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.317 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.576 nvme0n1 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
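The nvmf/common.sh@741-755 lines that repeat before every attach are get_main_ns_ip resolving which address the host should dial: the transport is mapped to the name of an environment variable and that variable is then expanded indirectly, yielding 10.0.0.1 in this run. A rough reconstruction of that helper, with the guard variable name (TEST_TRANSPORT) assumed since xtrace only shows expanded values:

  # Reconstruction of get_main_ns_ip from the trace -- a sketch, not the exact source.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA jobs dial the first target IP
          ["tcp"]=NVMF_INITIATOR_IP       # TCP jobs (this one) dial the initiator IP
      )
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1         # the indirect value is 10.0.0.1 in this run
      echo "${!ip}"
  }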
00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.576 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.835 nvme0n1 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
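After each attach, the script checks that the DH-HMAC-CHAP handshake really produced the controller it asked for and then detaches it before moving on to the next key or DH group; that is what the host/auth.sh@64-65 entries around this point are doing. In plain shell the step amounts to the sketch below (commands and the jq filter are copied from the trace):

  # Post-attach check and teardown as traced at auth.sh@64-65 (sketch).
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                      # authentication succeeded, controller is up
  rpc_cmd bdev_nvme_detach_controller nvme0   # clean up for the next keyid/dhgroup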
00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.835 nvme0n1 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.835 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.094 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.095 07:12:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.095 nvme0n1 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:57.095 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.354 nvme0n1 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.354 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.613 nvme0n1 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.613 07:12:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.613 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.872 nvme0n1 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:57.872 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.873 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.130 nvme0n1 00:25:58.130 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.130 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.130 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.130 07:12:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.130 07:12:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.130 07:12:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.130 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.131 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.131 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.388 nvme0n1 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:58.388 07:12:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.388 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.647 nvme0n1 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:58.647 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.905 nvme0n1 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:58.905 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.906 07:12:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 nvme0n1 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.164 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.423 nvme0n1 00:25:59.423 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.423 07:12:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.423 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.423 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.682 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.683 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.683 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.683 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.683 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.683 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.683 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.942 nvme0n1 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.942 07:12:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.201 nvme0n1 00:26:00.201 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.201 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.201 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.201 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.201 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.201 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
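The repeated ip_candidates bookkeeping in the trace above is the get_main_ns_ip helper from nvmf/common.sh resolving which address the host should dial. A rough reconstruction from the xtrace follows; the real body is not shown in this log, so the exact structure is an assumption and only names visible in the trace are used:

# Sketch of get_main_ns_ip as suggested by the xtrace (assumed reconstruction).
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP # RDMA runs dial the first target IP
		["tcp"]=NVMF_INITIATOR_IP     # TCP runs dial the initiator-side IP (10.0.0.1 here)
	)

	[[ -z $TEST_TRANSPORT ]] && return 1                   # trace shows [[ -z tcp ]]
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # unknown transport

	ip=${ip_candidates[$TEST_TRANSPORT]} # holds the *name* of the env var, not the IP
	[[ -z ${!ip} ]] && return 1          # indirect expansion: trace shows [[ -z 10.0.0.1 ]]
	echo "${!ip}"
}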
00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.460 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.719 nvme0n1 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:26:00.719 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
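For reference, one pass of the digest/dhgroup/keyid loop traced above reduces to the host-side sequence below. This is a minimal sketch using only the RPCs and flags visible in this trace; it assumes the target side already holds the matching secret (nvmet_auth_set_key) and that key${keyid}/ckey${keyid} name DH-HMAC-CHAP secrets registered earlier in host/auth.sh:

# One connect_authenticate iteration (illustrative values for keyid 2).
digest=sha384 dhgroup=ffdhe4096 keyid=2

# Restrict the initiator to the digest/DH-group pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with bidirectional authentication: --dhchap-key authenticates the host,
# --dhchap-ctrlr-key authenticates the controller back to the host.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the controller came up, then tear it down before the next iteration.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0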
00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.720 07:12:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.286 nvme0n1 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.286 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.287 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.853 nvme0n1 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.853 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.854 07:12:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.421 nvme0n1 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.421 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.989 nvme0n1 00:26:02.989 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.989 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:02.989 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.989 07:12:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.989 07:12:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.989 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.248 07:12:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.248 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.506 nvme0n1 00:26:03.506 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.506 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:03.765 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.766 nvme0n1 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.766 07:12:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.766 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.024 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.025 nvme0n1 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.025 07:12:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.025 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.284 nvme0n1 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.284 07:12:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.284 07:12:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.284 nvme0n1 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.284 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.543 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.543 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.543 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.543 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.543 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.543 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.544 nvme0n1 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.544 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.802 nvme0n1 00:26:04.802 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.802 
07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.802 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.803 07:12:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.803 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.060 nvme0n1 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
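Each digest/dhgroup/keyid pass in the trace above repeats the same host-side RPC sequence. A minimal sketch of that sequence follows, assuming SPDK's scripts/rpc.py is on PATH, the target from this run listens on 10.0.0.1:4420, and the DH-HMAC-CHAP keys named key1/ckey1 were registered earlier in the test (that registration step is outside this excerpt).

  # Pin the host to the digest/dhgroup pair under test (sha512 + ffdhe3072 in this pass).
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # Attach with DH-HMAC-CHAP: --dhchap-key authenticates the host; adding
  # --dhchap-ctrlr-key also requests authentication of the controller.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1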
00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.060 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.061 07:12:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.061 nvme0n1 00:26:05.061 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.061 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.061 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.061 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.061 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.061 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.061 07:12:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.061 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.061 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.061 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
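On the target side, nvmet_auth_set_key shows up in the trace only as a series of echo commands (the digest as 'hmac(sha512)', the dhgroup, the host key, and the controller key when one exists); bash xtrace does not print redirections, so the destinations are not visible here. A plausible sketch, assuming the kernel nvmet configfs attribute names for in-band authentication (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the host NQN used in this run:

  # Assumed configfs location for the host entry; the actual paths are not shown in the xtrace above.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'      > "$host/dhchap_hash"      # digest under test
  echo ffdhe3072           > "$host/dhchap_dhgroup"   # DH group under test
  echo 'DHHC-1:02:<key3>'  > "$host/dhchap_key"       # host key (keyid 3; placeholder)
  echo 'DHHC-1:00:<ckey3>' > "$host/dhchap_ctrl_key"  # controller key, when one is defined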
00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.320 nvme0n1 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.320 
07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.320 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.579 nvme0n1 00:26:05.579 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.579 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.579 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.579 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.580 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.838 nvme0n1 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.838 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.839 07:12:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.839 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.097 nvme0n1 00:26:06.097 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.097 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.097 07:12:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.097 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.097 07:12:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
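After every attach, the test verifies that the authenticated connection produced a controller named nvme0 and then detaches it before moving on to the next digest/dhgroup/keyid combination. The same check as a standalone snippet, using the bdev_nvme_get_controllers + jq pattern visible in the trace (rpc.py path assumed as above):

  # Confirm the authenticated connect produced the expected controller, then clean up.
  name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  if [[ "$name" == "nvme0" ]]; then
      rpc.py bdev_nvme_detach_controller nvme0
  else
      echo "unexpected controller list: $name" >&2
      exit 1
  fi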
00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.097 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.355 nvme0n1 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.355 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.356 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.356 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.615 nvme0n1 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.615 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.874 nvme0n1 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
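[editor's sketch] The nvmet_auth_set_key calls traced above only show the echoed values (the digest string, the DH group, and the DHHC-1 secrets); the trace does not show where they are written. The sketch below is an assumed reconstruction in which those values land in the kernel nvmet host entry under configfs; the /sys/kernel/config/nvmet paths and host NQN directory are assumptions, while the argument order and the "skip the controller key when empty" check mirror host/auth.sh@42-51 above.

    # Hypothetical target-side helper: push digest, DH group and DHHC-1 secrets
    # for one host NQN into the kernel nvmet configfs entry.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"
        echo "$dhgroup"        > "${host_dir}/dhchap_dhgroup"
        echo "$key"            > "${host_dir}/dhchap_key"
        # A controller key is optional; key index 4 in this trace has none.
        [[ -z $ckey ]] || echo "$ckey" > "${host_dir}/dhchap_ctrl_key"
    }

[end editor's sketch]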
00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.874 07:12:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.133 nvme0n1 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
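[editor's sketch] The ckey=(...) assignment at host/auth.sh@58 in the trace relies on bash's ${var:+word} expansion to build an optional argument list: when a controller key exists for the key index, the array expands to the extra --dhchap-ctrlr-key flag pair; otherwise it stays empty, which is why the keyid=4 iterations above attach without a controller key. A minimal standalone illustration follows (the secret value is hypothetical, only the expansion pattern is taken from the trace):

    # ${ckeys[keyid]:+...} expands only when ckeys[keyid] is set and non-empty,
    # so bdev_nvme_attach_controller receives --dhchap-ctrlr-key only for
    # bidirectional-auth iterations.
    declare -a ckeys=([1]="DHHC-1:02:example==" [4]="")
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # prints: keyid=1 extra args: --dhchap-ctrlr-key ckey1
    #         keyid=4 extra args: <none>

[end editor's sketch]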
00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.133 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.700 nvme0n1 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.700 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.701 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.701 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.960 nvme0n1 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.960 07:12:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.218 nvme0n1 00:26:08.218 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.218 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.218 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.218 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.218 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.477 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.736 nvme0n1 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.736 07:12:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWIyMDQ5YWI1N2U0ZjNiNmYxOWExYjQ0MjQ5Mjk3MmSYMtfu: 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: ]] 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTJhNWNkMzMwM2Y3MTQ1MDBiM2E2MTE0ZjU2MGIxNjg5NmE2ZGIyNDI2MTI3YTljNGU2OWY1MGJmN2YxZmRjMOaxthU=: 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.736 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.737 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.737 07:12:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.737 07:12:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.737 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.737 07:12:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.304 nvme0n1 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.304 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.871 nvme0n1 00:26:09.871 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.871 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.871 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.871 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.871 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.871 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.129 07:12:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTQ0OThlNjQ4NTdkY2JiMzViMjFjYTZlYTdkOWJiNWSC1Pri: 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: ]] 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDVkYTUyZjJmY2FjMTg5YWYxM2Q5ODA5NTYyMGVlMDGSc8pt: 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.129 07:12:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.697 nvme0n1 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzQxMGIxMjU1OGNjYTNkZTZkNTlhMDY4NWNjZDJkYjkzOWI3OTcyMmQ0OWE5ODUzZacGNA==: 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: ]] 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNjNzE3Yjk3OWRmZTkwZDIzYzhlOWVmNmY2Y2FhYmXV2GZt: 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:10.697 07:12:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.697 07:12:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.264 nvme0n1 00:26:11.264 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.264 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.264 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.264 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU2YTk1MDZlYWFmYTI1ZGQwMTM3NjAwYjYxZGJlMWE3ODUyMTMxZDRmZDQ2MGI0NGQzYTg5MmIxMzYyOTJlZdvDX2Q=: 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:11.265 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.831 nvme0n1 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2MyNWYyNGFkZTc2Yjk0NzA4MTQwNDJlOWQ5MzY0MDgxMWYzZGZlZWMyYTAxZjhjNYUQRw==: 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTFjNjFkNjU5OWZmYzk1MzZjMDcyZjMwOGUyNjkzYzNiZWU1ZTBlZjFjZTE3ZGVlE9k05Q==: 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.831 
07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.831 2024/07/13 07:12:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:11.831 request: 00:26:11.831 { 00:26:11.831 "method": "bdev_nvme_attach_controller", 00:26:11.831 "params": { 00:26:11.831 "name": "nvme0", 00:26:11.831 "trtype": "tcp", 00:26:11.831 "traddr": "10.0.0.1", 00:26:11.831 "adrfam": "ipv4", 00:26:11.831 "trsvcid": "4420", 00:26:11.831 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:11.831 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:11.831 "prchk_reftag": false, 00:26:11.831 "prchk_guard": false, 00:26:11.831 "hdgst": false, 00:26:11.831 "ddgst": false 00:26:11.831 } 00:26:11.831 } 00:26:11.831 Got JSON-RPC error response 00:26:11.831 GoRPCClient: error on JSON-RPC call 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:11.831 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.832 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:11.832 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.832 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.100 2024/07/13 07:12:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:12.100 request: 00:26:12.100 { 00:26:12.100 "method": "bdev_nvme_attach_controller", 00:26:12.100 "params": { 00:26:12.100 "name": 
"nvme0", 00:26:12.100 "trtype": "tcp", 00:26:12.100 "traddr": "10.0.0.1", 00:26:12.100 "adrfam": "ipv4", 00:26:12.100 "trsvcid": "4420", 00:26:12.100 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:12.100 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:12.100 "prchk_reftag": false, 00:26:12.100 "prchk_guard": false, 00:26:12.100 "hdgst": false, 00:26:12.100 "ddgst": false, 00:26:12.100 "dhchap_key": "key2" 00:26:12.100 } 00:26:12.100 } 00:26:12.100 Got JSON-RPC error response 00:26:12.100 GoRPCClient: error on JSON-RPC call 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:12.100 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.101 2024/07/13 07:12:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:26:12.101 request: 00:26:12.101 { 00:26:12.101 "method": "bdev_nvme_attach_controller", 00:26:12.101 "params": { 00:26:12.101 "name": "nvme0", 00:26:12.101 "trtype": "tcp", 00:26:12.101 "traddr": "10.0.0.1", 00:26:12.101 "adrfam": "ipv4", 00:26:12.101 "trsvcid": "4420", 00:26:12.101 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:12.101 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:12.101 "prchk_reftag": false, 00:26:12.101 "prchk_guard": false, 00:26:12.101 "hdgst": false, 00:26:12.101 "ddgst": false, 00:26:12.101 "dhchap_key": "key1", 00:26:12.101 "dhchap_ctrlr_key": "ckey2" 00:26:12.101 } 00:26:12.101 } 00:26:12.101 Got JSON-RPC error response 00:26:12.101 GoRPCClient: error on JSON-RPC call 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:12.101 07:12:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:12.101 rmmod nvme_tcp 00:26:12.101 rmmod nvme_fabrics 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 110379 ']' 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 110379 00:26:12.101 07:12:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 110379 ']' 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 110379 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110379 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:12.101 killing process with pid 110379 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110379' 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 110379 00:26:12.101 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 110379 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:12.385 07:12:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:13.321 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:13.321 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:13.321 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:13.321 07:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.p62 /tmp/spdk.key-null.w6v /tmp/spdk.key-sha256.PmA /tmp/spdk.key-sha384.2Qo /tmp/spdk.key-sha512.2HO /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:13.321 07:12:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:13.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:13.839 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:13.839 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:13.839 ************************************ 00:26:13.839 END TEST nvmf_auth_host 00:26:13.839 ************************************ 00:26:13.839 00:26:13.839 real 0m33.002s 00:26:13.839 user 0m30.489s 00:26:13.839 sys 0m3.840s 00:26:13.839 07:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.839 07:12:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.839 07:12:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:13.839 07:12:21 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:13.839 07:12:21 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:13.839 07:12:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:13.839 07:12:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.839 07:12:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.839 ************************************ 00:26:13.839 START TEST nvmf_digest 00:26:13.839 ************************************ 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:13.839 * Looking for test storage... 
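For reference, each successful attach exercised by auth.sh earlier in the run reduces to two host-side bdev_nvme RPCs, after nvmet_auth_set_key has echoed the hmac digest, dhgroup and DHHC-1 secrets into the kernel nvmet configfs. A minimal sketch, condensed from the calls visible in the log and assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock (key index 3 shown as the example):
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
The negative cases earlier in the run deliberately omit or mismatch the --dhchap-key arguments and expect the JSON-RPC call to fail, which is what the two request/response dumps with Code=-5 (Input/output error) show.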
00:26:13.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.839 07:12:21 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
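nvmftestinit falls through to nvmf_veth_init at this point, which wires the initiator interface and a network-namespaced target together over a Linux bridge. Condensed from the ip commands logged below (namespace and interface names as defined in nvmf/common.sh):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
The ping checks against 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 further down confirm the bridge forwards traffic in both directions before modprobe nvme-tcp loads the NVMe/TCP host driver.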
00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:13.840 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:14.099 Cannot find device "nvmf_tgt_br" 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:14.099 Cannot find device "nvmf_tgt_br2" 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:14.099 Cannot find device "nvmf_tgt_br" 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:14.099 Cannot find device "nvmf_tgt_br2" 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:14.099 07:12:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:14.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:14.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:14.099 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:14.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:26:14.358 00:26:14.358 --- 10.0.0.2 ping statistics --- 00:26:14.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.358 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:14.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:14.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:26:14.358 00:26:14.358 --- 10.0.0.3 ping statistics --- 00:26:14.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.358 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:14.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:14.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:14.358 00:26:14.358 --- 10.0.0.1 ping statistics --- 00:26:14.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.358 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.358 ************************************ 00:26:14.358 START TEST nvmf_digest_clean 00:26:14.358 ************************************ 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=111940 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 111940 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 111940 ']' 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.358 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.358 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:14.359 07:12:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.359 [2024-07-13 07:12:22.309366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:14.359 [2024-07-13 07:12:22.310072] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.618 [2024-07-13 07:12:22.459887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.618 [2024-07-13 07:12:22.593452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.618 [2024-07-13 07:12:22.593529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.618 [2024-07-13 07:12:22.593544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.618 [2024-07-13 07:12:22.593578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.618 [2024-07-13 07:12:22.593588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:14.618 [2024-07-13 07:12:22.593622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.554 null0 00:26:15.554 [2024-07-13 07:12:23.497576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.554 [2024-07-13 07:12:23.521703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111997 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111997 /var/tmp/bperf.sock 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 111997 ']' 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
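The null0 bdev, the TCP transport init and the 10.0.0.2:4420 listener reported above are the target-side prerequisites for every digest case. A rough sketch of the kind of RPC sequence that produces those notices; the null-bdev geometry and transport options here are assumptions, not taken from digest.sh:
  scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create null0 1000 512
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -a
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Because the RPC endpoint is a Unix domain socket, these calls work from the root namespace even though nvmf_tgt itself runs inside nvmf_tgt_ns_spdk.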
00:26:15.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:15.554 07:12:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.554 [2024-07-13 07:12:23.587876] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:15.554 [2024-07-13 07:12:23.588235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111997 ] 00:26:15.811 [2024-07-13 07:12:23.733209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.811 [2024-07-13 07:12:23.832968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.747 07:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.747 07:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:16.747 07:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:16.747 07:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:16.747 07:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:17.006 07:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.006 07:12:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.265 nvme0n1 00:26:17.265 07:12:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:17.265 07:12:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:17.265 Running I/O for 2 seconds... 
00:26:19.797 00:26:19.797 Latency(us) 00:26:19.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.797 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:19.797 nvme0n1 : 2.00 21754.94 84.98 0.00 0.00 5876.97 3142.75 16801.05 00:26:19.797 =================================================================================================================== 00:26:19.797 Total : 21754.94 84.98 0.00 0.00 5876.97 3142.75 16801.05 00:26:19.797 0 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:19.797 | select(.opcode=="crc32c") 00:26:19.797 | "\(.module_name) \(.executed)"' 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111997 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 111997 ']' 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 111997 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111997 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111997' 00:26:19.797 killing process with pid 111997 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 111997 00:26:19.797 Received shutdown signal, test time was about 2.000000 seconds 00:26:19.797 00:26:19.797 Latency(us) 00:26:19.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.797 =================================================================================================================== 00:26:19.797 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 111997 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:19.797 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:19.798 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112082 00:26:19.798 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112082 /var/tmp/bperf.sock 00:26:19.798 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112082 ']' 00:26:19.798 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:19.798 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:19.798 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:19.798 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:19.798 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.798 07:12:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.056 [2024-07-13 07:12:27.905621] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:20.056 [2024-07-13 07:12:27.905760] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112082 ] 00:26:20.056 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:20.056 Zero copy mechanism will not be used. 
00:26:20.056 [2024-07-13 07:12:28.043784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.314 [2024-07-13 07:12:28.137665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.879 07:12:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:20.879 07:12:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:20.879 07:12:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:20.879 07:12:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:20.879 07:12:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:21.135 07:12:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.135 07:12:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.392 nvme0n1 00:26:21.650 07:12:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:21.650 07:12:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.650 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.650 Zero copy mechanism will not be used. 00:26:21.650 Running I/O for 2 seconds... 00:26:23.549 00:26:23.549 Latency(us) 00:26:23.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.549 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:23.549 nvme0n1 : 2.00 7827.81 978.48 0.00 0.00 2040.69 599.51 10485.76 00:26:23.549 =================================================================================================================== 00:26:23.549 Total : 7827.81 978.48 0.00 0.00 2040.69 599.51 10485.76 00:26:23.549 0 00:26:23.549 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:23.549 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:23.549 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:23.549 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:23.549 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:23.549 | select(.opcode=="crc32c") 00:26:23.549 | "\(.module_name) \(.executed)"' 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112082 00:26:23.807 07:12:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112082 ']' 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112082 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112082 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:23.807 killing process with pid 112082 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112082' 00:26:23.807 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112082 00:26:23.807 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.807 00:26:23.807 Latency(us) 00:26:23.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.807 =================================================================================================================== 00:26:23.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.808 07:12:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112082 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112173 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112173 /var/tmp/bperf.sock 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112173 ']' 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:24.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:24.066 07:12:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.324 [2024-07-13 07:12:32.151378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:24.324 [2024-07-13 07:12:32.151502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112173 ] 00:26:24.324 [2024-07-13 07:12:32.290960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.324 [2024-07-13 07:12:32.370253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.258 07:12:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:25.258 07:12:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:25.258 07:12:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:25.258 07:12:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:25.258 07:12:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:25.516 07:12:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.516 07:12:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.774 nvme0n1 00:26:25.774 07:12:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:25.774 07:12:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.774 Running I/O for 2 seconds... 
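Once the client is listening, every clean-digest run issues the same RPC sequence visible in the trace: finish framework initialization, attach an NVMe-oF/TCP controller with data digest enabled, and drive I/O through bdevperf.py. A minimal sketch using only the calls shown above (rpc.py and bdevperf.py abbreviated; full paths as in the log):

    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    bdevperf.py -s /var/tmp/bperf.sock perform_tests   # "Running I/O for 2 seconds..."

The MiB/s column in the result tables is simply IOPS times I/O size: 21754.94 x 4096 B / 2^20 ≈ 84.98 MiB/s for the earlier 4 KiB randread run, and 7827.81 x 131072 B / 2^20 ≈ 978.48 MiB/s for the 128 KiB randread run above.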
00:26:28.364 00:26:28.364 Latency(us) 00:26:28.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.364 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:28.364 nvme0n1 : 2.01 25794.11 100.76 0.00 0.00 4957.18 2010.76 15728.64 00:26:28.364 =================================================================================================================== 00:26:28.364 Total : 25794.11 100.76 0.00 0.00 4957.18 2010.76 15728.64 00:26:28.364 0 00:26:28.364 07:12:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:28.364 07:12:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:28.364 07:12:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:28.364 07:12:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:28.364 07:12:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:28.364 | select(.opcode=="crc32c") 00:26:28.364 | "\(.module_name) \(.executed)"' 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112173 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112173 ']' 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112173 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112173 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:28.364 killing process with pid 112173 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112173' 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112173 00:26:28.364 Received shutdown signal, test time was about 2.000000 seconds 00:26:28.364 00:26:28.364 Latency(us) 00:26:28.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.364 =================================================================================================================== 00:26:28.364 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112173 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:28.364 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112259 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112259 /var/tmp/bperf.sock 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 112259 ']' 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.365 07:12:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:28.365 [2024-07-13 07:12:36.348198] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:28.365 [2024-07-13 07:12:36.348291] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112259 ] 00:26:28.365 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:28.365 Zero copy mechanism will not be used. 
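Each run above ends with the same verification step: the crc32c statistics are read back from the accel layer and the test checks that the expected module ("software", since scan_dsa=false) executed a non-zero number of operations. Reassembled from the accel_get_stats call and jq filter in the trace, that check looks roughly like the sketch below (rpc.py abbreviated; full path as in the log):

    rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' |
      while read -r acc_module acc_executed; do
          # pass only if the software module handled crc32c and actually ran
          [[ $acc_module == software ]] && (( acc_executed > 0 )) && echo "crc32c ok: $acc_executed ops"
      done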
00:26:28.624 [2024-07-13 07:12:36.480344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.624 [2024-07-13 07:12:36.563610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.558 07:12:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.558 07:12:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:29.558 07:12:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:29.558 07:12:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:29.558 07:12:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:29.558 07:12:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.558 07:12:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.816 nvme0n1 00:26:29.817 07:12:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:29.817 07:12:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:30.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:30.075 Zero copy mechanism will not be used. 00:26:30.075 Running I/O for 2 seconds... 00:26:31.978 00:26:31.978 Latency(us) 00:26:31.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.978 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:31.978 nvme0n1 : 2.00 6833.96 854.25 0.00 0.00 2336.35 1653.29 4676.89 00:26:31.978 =================================================================================================================== 00:26:31.978 Total : 6833.96 854.25 0.00 0.00 2336.35 1653.29 4676.89 00:26:31.978 0 00:26:31.978 07:12:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:31.978 07:12:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:31.978 07:12:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:31.978 07:12:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:31.978 07:12:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:31.978 | select(.opcode=="crc32c") 00:26:31.978 | "\(.module_name) \(.executed)"' 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112259 00:26:32.236 07:12:40 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 112259 ']' 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 112259 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112259 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:32.236 killing process with pid 112259 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112259' 00:26:32.236 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 112259 00:26:32.236 Received shutdown signal, test time was about 2.000000 seconds 00:26:32.236 00:26:32.236 Latency(us) 00:26:32.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.236 =================================================================================================================== 00:26:32.237 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.237 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 112259 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 111940 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 111940 ']' 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 111940 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111940 00:26:32.495 killing process with pid 111940 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111940' 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 111940 00:26:32.495 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 111940 00:26:32.753 ************************************ 00:26:32.753 END TEST nvmf_digest_clean 00:26:32.753 ************************************ 00:26:32.753 00:26:32.753 real 0m18.515s 00:26:32.753 user 0m34.149s 00:26:32.753 sys 0m5.201s 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest -- 
host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:32.753 ************************************ 00:26:32.753 START TEST nvmf_digest_error 00:26:32.753 ************************************ 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=112373 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 112373 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112373 ']' 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.753 07:12:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.011 [2024-07-13 07:12:40.875723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:33.011 [2024-07-13 07:12:40.875818] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.011 [2024-07-13 07:12:41.015283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.270 [2024-07-13 07:12:41.121210] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.270 [2024-07-13 07:12:41.121268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.270 [2024-07-13 07:12:41.121280] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.270 [2024-07-13 07:12:41.121287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.270 [2024-07-13 07:12:41.121294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
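The error-path test then brings up a fresh NVMe-oF target in its own network namespace, with every tracepoint group enabled and initialization deferred, exactly as the nvmfappstart line above records. Stripped of the harness wrappers, the launch amounts to this sketch (namespace and binary path as they appear in the log):

    # -e 0xFFFF enables all tracepoint groups; --wait-for-rpc defers init until RPCs arrive
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # the harness then waits for the target's RPC socket, /var/tmp/spdk.sock, to accept connections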
00:26:33.270 [2024-07-13 07:12:41.121317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.837 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.838 [2024-07-13 07:12:41.869838] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.838 07:12:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.097 null0 00:26:34.097 [2024-07-13 07:12:42.003272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.097 [2024-07-13 07:12:42.027402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112417 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112417 /var/tmp/bperf.sock 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112417 ']' 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 
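Before any error-path I/O runs, the target routes crc32c to the "error" accel module and common_target_config sets up a null0 bdev with a TCP listener on 10.0.0.2:4420, as the notices above show; the bperf client for the 4 KiB randread case is then started without --wait-for-rpc. A sketch of those two steps, restating only the commands in the trace (rpc.py abbreviated, talking to the target's default socket):

    # Target side: every crc32c operation is handled by the error-injection module
    rpc.py accel_assign_opc -o crc32c -m error
    # (common_target_config then creates the null0 bdev and the 10.0.0.2:4420 TCP listener)

    # Client side: bdevperf for the randread/4096/qd128 error case
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &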
00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:34.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:34.097 07:12:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.097 [2024-07-13 07:12:42.091644] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:34.097 [2024-07-13 07:12:42.092039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112417 ] 00:26:34.356 [2024-07-13 07:12:42.233220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.356 [2024-07-13 07:12:42.325835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.292 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.860 nvme0n1 00:26:35.860 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:35.860 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.860 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.860 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.860 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:35.860 07:12:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.860 Running I/O for 2 seconds... 
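The RPC sequence above is what produces the digest errors that follow: NVMe error statistics are enabled and bdev-level retries disabled, the controller is attached with data digest on while crc32c injection is still disabled, and only then are 256 corrupted crc32c results injected on the target side, so the digests generated there no longer match the payload and the reads below fail the host's digest check, completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a sketch (rpc.py abbreviated; the injection calls go to the target's socket, the others to /var/tmp/bperf.sock, as in the trace):

    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py accel_error_inject_error -o crc32c -t disable          # target: no corruption yet
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # target: corrupt next 256 crc32c results
    bdevperf.py -s /var/tmp/bperf.sock perform_tests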
00:26:35.860 [2024-07-13 07:12:43.818590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.860 [2024-07-13 07:12:43.819203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.860 [2024-07-13 07:12:43.819315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.860 [2024-07-13 07:12:43.831726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.860 [2024-07-13 07:12:43.831846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.860 [2024-07-13 07:12:43.831933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.860 [2024-07-13 07:12:43.843767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.860 [2024-07-13 07:12:43.843878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.860 [2024-07-13 07:12:43.843957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.860 [2024-07-13 07:12:43.854730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.860 [2024-07-13 07:12:43.854841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.861 [2024-07-13 07:12:43.854925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.861 [2024-07-13 07:12:43.865589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.861 [2024-07-13 07:12:43.865706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.861 [2024-07-13 07:12:43.865793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.861 [2024-07-13 07:12:43.879102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.861 [2024-07-13 07:12:43.879215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.861 [2024-07-13 07:12:43.879299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.861 [2024-07-13 07:12:43.890639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.861 [2024-07-13 07:12:43.890748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.861 [2024-07-13 07:12:43.890827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.861 [2024-07-13 07:12:43.901155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.861 [2024-07-13 07:12:43.901263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.861 [2024-07-13 07:12:43.901348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.861 [2024-07-13 07:12:43.912985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.861 [2024-07-13 07:12:43.913094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.861 [2024-07-13 07:12:43.913172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.861 [2024-07-13 07:12:43.925465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:35.861 [2024-07-13 07:12:43.925613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.861 [2024-07-13 07:12:43.925702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:43.937763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.120 [2024-07-13 07:12:43.937857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.120 [2024-07-13 07:12:43.937942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:43.950423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.120 [2024-07-13 07:12:43.950542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.120 [2024-07-13 07:12:43.950655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:43.961220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.120 [2024-07-13 07:12:43.961342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.120 [2024-07-13 07:12:43.961423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:43.973761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.120 [2024-07-13 07:12:43.973877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.120 [2024-07-13 07:12:43.973960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:43.984748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.120 [2024-07-13 07:12:43.984784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.120 [2024-07-13 07:12:43.984799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:43.995922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.120 [2024-07-13 07:12:43.995957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.120 [2024-07-13 07:12:43.995969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:44.007976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.120 [2024-07-13 07:12:44.008008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.120 [2024-07-13 07:12:44.008020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:44.019315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.120 [2024-07-13 07:12:44.019348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.120 [2024-07-13 07:12:44.019360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:44.030965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.120 [2024-07-13 07:12:44.030998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.120 [2024-07-13 07:12:44.031010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.120 [2024-07-13 07:12:44.040096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.040129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.040142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.052655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.052686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.121 [2024-07-13 07:12:44.052698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.064652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.064683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.064695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.077010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.077042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.077054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.088207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.088238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.088255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.099149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.099181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.099194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.110814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.110846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.110858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.122789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.122820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.122832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.134616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.134647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:18092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.134659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.144460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.144491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.144503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.156453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.156485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.156497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.169286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.169317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.169330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.180601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.180633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.180645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.121 [2024-07-13 07:12:44.191599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.121 [2024-07-13 07:12:44.191630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.121 [2024-07-13 07:12:44.191642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.380 [2024-07-13 07:12:44.201606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.380 [2024-07-13 07:12:44.201638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.380 [2024-07-13 07:12:44.201650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.213443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.213475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.213487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.225641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.225672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.225684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.236685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.236717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.236729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.249018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.249049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.249061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.259304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.259336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.259349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.271268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.271300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.271312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.284406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.284439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.284451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.296733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 
00:26:36.381 [2024-07-13 07:12:44.296765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.296778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.306944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.306976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.306987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.317216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.317260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.317273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.328836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.328868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.328880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.341094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.341126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.341139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.354501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.354532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.354547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.364534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.364575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.364588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.376427] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.376459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.376471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.388995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.389026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.389038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.401524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.401566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.401580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.413353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.413385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.413398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.424923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.424954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.424967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.436396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.436428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.436440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.381 [2024-07-13 07:12:44.448211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.381 [2024-07-13 07:12:44.448251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.381 [2024-07-13 07:12:44.448264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:36.641 [2024-07-13 07:12:44.460041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.460073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.460084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.470079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.470110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.470122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.482733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.482765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.482777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.495652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.495684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.495697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.507404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.507436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.507448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.517777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.517808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.517819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.530265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.530298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.530310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.542149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.542181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.542193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.554241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.554273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.554285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.564219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.564252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.564264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.576046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.576078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.576090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.589658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.589688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.589700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.598574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.598605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.598617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.610529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.610571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.610584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.622319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.622350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.622362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.641 [2024-07-13 07:12:44.633312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.641 [2024-07-13 07:12:44.633344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.641 [2024-07-13 07:12:44.633356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.642 [2024-07-13 07:12:44.645212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.642 [2024-07-13 07:12:44.645244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.642 [2024-07-13 07:12:44.645256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.642 [2024-07-13 07:12:44.655051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.642 [2024-07-13 07:12:44.655082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.642 [2024-07-13 07:12:44.655095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.642 [2024-07-13 07:12:44.666096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.642 [2024-07-13 07:12:44.666128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.642 [2024-07-13 07:12:44.666140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.642 [2024-07-13 07:12:44.679179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.642 [2024-07-13 07:12:44.679211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.642 [2024-07-13 07:12:44.679223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.642 [2024-07-13 07:12:44.690004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.642 [2024-07-13 07:12:44.690038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.642 [2024-07-13 07:12:44.690054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.642 [2024-07-13 07:12:44.702877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.642 [2024-07-13 07:12:44.702918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.642 [2024-07-13 07:12:44.702933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.642 [2024-07-13 07:12:44.714685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.642 [2024-07-13 07:12:44.714716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.642 [2024-07-13 07:12:44.714728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.726326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.726358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.726371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.737764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.737795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.737806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.749253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.749298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.749310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.763477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.763510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.763523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.775056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.775087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:4442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.775099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.787402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.787434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.787446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.798912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.798955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.798966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.810207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.810240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.810257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.821261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.821304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.821316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.833661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.833693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.833706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.845738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.845770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.845782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.858791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.858838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.858850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.870307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.870339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.870351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.882263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.882295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.882309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.892283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.892315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.892327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.904121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.904154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.904166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.916111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.916144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.916156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.928584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.901 [2024-07-13 07:12:44.928615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.928628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.901 [2024-07-13 07:12:44.939949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 
00:26:36.901 [2024-07-13 07:12:44.939981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.901 [2024-07-13 07:12:44.939994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.902 [2024-07-13 07:12:44.951300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.902 [2024-07-13 07:12:44.951332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.902 [2024-07-13 07:12:44.951345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.902 [2024-07-13 07:12:44.963314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:36.902 [2024-07-13 07:12:44.963346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.902 [2024-07-13 07:12:44.963358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.161 [2024-07-13 07:12:44.975811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.161 [2024-07-13 07:12:44.975851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.161 [2024-07-13 07:12:44.975864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.161 [2024-07-13 07:12:44.986547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.161 [2024-07-13 07:12:44.986592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.161 [2024-07-13 07:12:44.986606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.161 [2024-07-13 07:12:44.998198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.161 [2024-07-13 07:12:44.998230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.161 [2024-07-13 07:12:44.998242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.161 [2024-07-13 07:12:45.009295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.161 [2024-07-13 07:12:45.009330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.161 [2024-07-13 07:12:45.009349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.161 [2024-07-13 07:12:45.022044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.161 [2024-07-13 07:12:45.022077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.161 [2024-07-13 07:12:45.022088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.161 [2024-07-13 07:12:45.033409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.161 [2024-07-13 07:12:45.033441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.161 [2024-07-13 07:12:45.033453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.161 [2024-07-13 07:12:45.043486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.161 [2024-07-13 07:12:45.043519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.161 [2024-07-13 07:12:45.043532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.161 [2024-07-13 07:12:45.054560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.161 [2024-07-13 07:12:45.054590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.161 [2024-07-13 07:12:45.054610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.161 [2024-07-13 07:12:45.066967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.066999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.067012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.079208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.079240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.079259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.090044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.090075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.090088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.101965] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.101996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.102008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.114295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.114326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.114338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.124283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.124316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.124328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.135445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.135477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.135489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.147123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.147154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.147166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.160121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.160153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.160165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.169781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.169812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.169824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.182929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.182960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.182972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.194886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.194918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.194930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.207122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.207155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.207167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.217785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.217817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.217829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.162 [2024-07-13 07:12:45.231619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.162 [2024-07-13 07:12:45.231650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.162 [2024-07-13 07:12:45.231662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.241602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.241633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.241645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.252724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.252756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.252768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.263747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.263779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.263791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.277153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.277185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.277198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.287221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.287253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.287265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.299359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.299392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.299404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.311944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.311976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.311988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.323904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.323936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.323948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.334664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.334695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.334707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.347346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.347378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.347392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.359463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.359497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.359509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.372076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.372109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.372121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.382261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.382293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.382305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.395238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.395270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.395283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.407693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.407724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.407735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.420067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.420099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 
[2024-07-13 07:12:45.420111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.431511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.431543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.431568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.443001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.443033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.443046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.454255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.454287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.454299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.464659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.464690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.464701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.474639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.474671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.474684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.422 [2024-07-13 07:12:45.487043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.422 [2024-07-13 07:12:45.487074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.422 [2024-07-13 07:12:45.487086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.496983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.497014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9112 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.497026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.510330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.510362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.510374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.520532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.520572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.520585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.532600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.532631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.532643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.545535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.545581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.545594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.557309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.557340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.557352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.568655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.568684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.568696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.578932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.578963] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.578975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.591370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.591401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.591412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.602762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.602794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.602805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.613629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.613659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.613670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.627057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.627088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.627100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.637777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.637808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.637820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.649375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.649406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.649418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.661745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.661775] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.661787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.672615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.672645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.672657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.684187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.684217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.684229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.696445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.696476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.696488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.707235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.707266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.707279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.717876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.717906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.717918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.729067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.729098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.729110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.741636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.741665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.741678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.683 [2024-07-13 07:12:45.752600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.683 [2024-07-13 07:12:45.752630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.683 [2024-07-13 07:12:45.752646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.948 [2024-07-13 07:12:45.763655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.948 [2024-07-13 07:12:45.763684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.948 [2024-07-13 07:12:45.763697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.948 [2024-07-13 07:12:45.776055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.948 [2024-07-13 07:12:45.776096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.948 [2024-07-13 07:12:45.776108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.948 [2024-07-13 07:12:45.787475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.948 [2024-07-13 07:12:45.787506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.948 [2024-07-13 07:12:45.787518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.948 [2024-07-13 07:12:45.797622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1234c10) 00:26:37.948 [2024-07-13 07:12:45.797652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.948 [2024-07-13 07:12:45.797664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.948 00:26:37.948 Latency(us) 00:26:37.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.948 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:37.948 nvme0n1 : 2.00 21811.70 85.20 0.00 0.00 5860.90 3068.28 17635.14 00:26:37.948 =================================================================================================================== 00:26:37.948 Total : 21811.70 85.20 0.00 0.00 5860.90 3068.28 17635.14 00:26:37.948 0 00:26:37.948 07:12:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:26:37.948 07:12:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:37.948 07:12:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:37.948 | .driver_specific 00:26:37.948 | .nvme_error 00:26:37.948 | .status_code 00:26:37.948 | .command_transient_transport_error' 00:26:37.948 07:12:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 )) 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112417 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112417 ']' 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112417 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112417 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:38.209 killing process with pid 112417 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112417' 00:26:38.209 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.209 00:26:38.209 Latency(us) 00:26:38.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.209 =================================================================================================================== 00:26:38.209 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112417 00:26:38.209 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112417 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112507 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112507 /var/tmp/bperf.sock 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112507 ']' 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.467 07:12:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.467 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:38.467 Zero copy mechanism will not be used. 00:26:38.467 [2024-07-13 07:12:46.482266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:38.467 [2024-07-13 07:12:46.482372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112507 ] 00:26:38.725 [2024-07-13 07:12:46.620999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.725 [2024-07-13 07:12:46.726920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.661 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.920 nvme0n1 00:26:39.920 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:39.920 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.920 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.920 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.920 07:12:47 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:39.920 07:12:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:40.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:40.182 Zero copy mechanism will not be used. 00:26:40.182 Running I/O for 2 seconds... 00:26:40.182 [2024-07-13 07:12:48.116628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.182 [2024-07-13 07:12:48.116687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.182 [2024-07-13 07:12:48.116701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.182 [2024-07-13 07:12:48.121225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.182 [2024-07-13 07:12:48.121259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.182 [2024-07-13 07:12:48.121271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.182 [2024-07-13 07:12:48.125370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.182 [2024-07-13 07:12:48.125402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.182 [2024-07-13 07:12:48.125414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.182 [2024-07-13 07:12:48.129302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.182 [2024-07-13 07:12:48.129334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.182 [2024-07-13 07:12:48.129345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.182 [2024-07-13 07:12:48.131663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.182 [2024-07-13 07:12:48.131692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.182 [2024-07-13 07:12:48.131702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.182 [2024-07-13 07:12:48.136210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.182 [2024-07-13 07:12:48.136242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.182 [2024-07-13 07:12:48.136260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.182 
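The trace above sets up the 128 KiB digest-error pass: a second bdevperf is started idle (-z) on /var/tmp/bperf.sock, NVMe error statistics are enabled with retries disabled, the controller is attached over TCP with data digests (--ddgst), crc32c error injection is switched from disable to corrupt, and perform_tests starts the two-second run whose output follows. A minimal sketch of that sequence, reusing only the commands and arguments visible in the trace (waitforlisten and rpc_cmd are helpers from the test framework and are assumed here):

    # Sketch only -- mirrors the host/digest.sh steps shown in the trace above.
    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf idle (-z) so it can be configured over its RPC socket first.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" "$BPERF_SOCK"   # test-framework helper, as in the trace

    # Count NVMe error completions per status code and never retry failed I/O.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Clear any previous injection, attach the controller with data digests
    # enabled, then corrupt crc32c results (same opcode/type/interval arguments
    # as the trace) so digest verification fails; rpc_cmd is the framework's
    # RPC helper for the application under test.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the 2-second randread workload; the digest errors below are expected.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The digest-error entries that follow are the intended outcome of this configuration, not a regression.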
[2024-07-13 07:12:48.140976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.182 [2024-07-13 07:12:48.141008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.182 [2024-07-13 07:12:48.141019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.182 [2024-07-13 07:12:48.145441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.145472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.145484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.148126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.148156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.148167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.152003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.152034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.152046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.155929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.155960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.155972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.159791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.159823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.159834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.162647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.162679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.162690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.166195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.166226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.166238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.170572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.170602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.170614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.173646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.173676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.173688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.177597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.177628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.177640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.181476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.181508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.181520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.184361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.184391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.184402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.188085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.188117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.188128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.191588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.191619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.191630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.194983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.195015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.195026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.198332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.198362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.198373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.202642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.202672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.202683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.206338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.206369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.206380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.209259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.209290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.209302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.212983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.213013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.213024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.216703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.216734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.216746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.220488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.220519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.220530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.223708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.223738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.223750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.227422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.227453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.227464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.231521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.231563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.231575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.234298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.234328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.234339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.238183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.238214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
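A side note on the repeated entries: this pass reports len:32 per READ where the earlier pass reported len:1. That is consistent with the configured I/O sizes, assuming a 4096-byte block size on the namespace: 131072 / 4096 = 32 blocks here, and 4096 / 4096 = 1 block in the earlier 4096-byte run.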
00:26:40.183 [2024-07-13 07:12:48.238225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.242533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.242574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.242586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.246457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.183 [2024-07-13 07:12:48.246494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.183 [2024-07-13 07:12:48.246506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.183 [2024-07-13 07:12:48.249491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.184 [2024-07-13 07:12:48.249522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.184 [2024-07-13 07:12:48.249534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.184 [2024-07-13 07:12:48.253773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.184 [2024-07-13 07:12:48.253804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.184 [2024-07-13 07:12:48.253815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.257443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.257474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.257485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.260762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.260792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.260803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.264161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.264191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.264203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.267955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.267986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.267997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.271040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.271070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.271082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.275124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.275155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.275167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.279595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.279637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.279648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.284128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.284159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.284170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.286875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.286905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.286916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.290095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.290125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.290137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.294375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.294407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.294418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.297111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.297141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.297155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.300759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.300790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.300801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.304121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.304152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.304163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.308353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.308384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.308395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.311844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.311874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.311886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.314995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.315026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.315037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.319026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.319057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.319068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.322644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.322691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.322703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.326387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.326416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.326430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.330439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.330469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.330507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.334363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.334394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.334405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.337361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.337392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.337403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.340672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 
[2024-07-13 07:12:48.340703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.340714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.344121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.344152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-07-13 07:12:48.344164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.445 [2024-07-13 07:12:48.348306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.445 [2024-07-13 07:12:48.348336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.348348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.350790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.350834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.350845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.354877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.354908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.354920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.358566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.358595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.358606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.361629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.361659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.361671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.365237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.365269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.365280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.368605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.368636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.368648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.371964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.371995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.372006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.376007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.376040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.376052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.379135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.379166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.379177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.382804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.382835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.382847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.386764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.386795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.386806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.390622] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.390652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.390663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.393369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.393400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.393411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.396967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.396997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.397008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.400523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.400566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.400579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.403389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.403420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.403431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.407134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.407165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.407179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.410902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.410933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.410944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
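These entries continue until the two-second run ends; the pass/fail decision then repeats the get_transient_errcount check seen at the top of this section, where the injected digest failures must show up as a non-zero command_transient_transport_error count. A minimal sketch of that readback, using the same RPC call, socket, and jq path shown earlier in the trace:

    # Sketch only -- same readback as host/digest.sh get_transient_errcount above.
    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    errcount=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # Fail the test unless at least one transient transport error was counted
    # (the earlier pass saw 171, hence the "(( 171 > 0 ))" check above).
    (( errcount > 0 ))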
00:26:40.446 [2024-07-13 07:12:48.414443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.414473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.414495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.418156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.418187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.418198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.421478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.421508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.421519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.425162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.425192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.425203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.428245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.428276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.428287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.431671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.431701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.431720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.435241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.435272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.435282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.438832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.438879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.438895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.442439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.442470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.442494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.445802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.445833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-07-13 07:12:48.445844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.446 [2024-07-13 07:12:48.449076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.446 [2024-07-13 07:12:48.449107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.449118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.452724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.452754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.452766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.455893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.455923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.455934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.459749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.459780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.459791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.463863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.463893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.463904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.466967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.466996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.467007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.470688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.470720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.470732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.475187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.475218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.475229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.479548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.479588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.479601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.481877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.481907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.481918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.485971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.486003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.486014] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.489319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.489351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.489362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.492756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.492786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.492797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.496217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.496261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.496272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.499840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.499871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.499883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.503237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.503269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.503280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.506796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.506827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.506838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.509973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.510005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.510016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.513420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.513451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.513466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.447 [2024-07-13 07:12:48.517384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.447 [2024-07-13 07:12:48.517414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-07-13 07:12:48.517426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.520939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.520970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.520981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.524438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.524470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.524481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.528008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.528040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.528051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.531247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.531278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.531289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.534943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.534974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.534986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.539024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.539055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.539067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.541686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.541715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.541726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.545035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.545065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.545076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.549272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.549303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.549314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.553108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.553138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.553150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.556123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.556154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.556166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.560311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.560342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.560353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.564660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.564691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.564702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.568395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.568426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.568437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.571212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.571243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.571260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.575761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.575792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.575803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.579933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.579963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.579974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.582592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 [2024-07-13 07:12:48.582623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.582634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.708 [2024-07-13 07:12:48.586324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.708 
[2024-07-13 07:12:48.586355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-07-13 07:12:48.586366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.589173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.589202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.589213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.592845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.592875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.592886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.596908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.596940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.596951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.600288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.600319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.600330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.603623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.603654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.603665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.607367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.607398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.607409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.610494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.610524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.610535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.613943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.613972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.613983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.617600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.617631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.617642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.621309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.621340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.621351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.624531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.624571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.624583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.628000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.628030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.628040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.631637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.631668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.631679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.635137] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.635168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.635179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.638418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.638449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.638460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.641234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.641264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.641275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.645089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.645120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.645132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.648296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.648326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.648337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.652285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.652316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.652328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.656422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.656454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.656465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:26:40.709 [2024-07-13 07:12:48.659369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.659400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.659411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.662972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.663003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.663015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.667036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.667066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.667078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.670110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.670139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.670150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.673817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.673847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.673859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.677211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.677242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.677260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.680410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.680440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.680451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.684275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.684307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.684318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.709 [2024-07-13 07:12:48.687966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.709 [2024-07-13 07:12:48.687996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.709 [2024-07-13 07:12:48.688008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.691201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.691232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.691243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.694569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.694599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.694611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.697981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.698012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.698023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.701458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.701489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.701500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.705268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.705299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.705316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.708696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.708727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.708738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.712200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.712232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.712243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.716278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.716309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.716321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.719087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.719118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.719130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.722708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.722741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.722752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.727261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.727292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.727304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.730297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.730326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.710 [2024-07-13 07:12:48.730343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.733955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.733985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.733996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.737043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.737073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.737084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.740432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.740462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.740473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.744148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.744179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.744190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.747279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.747309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.747320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.751224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.751257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.751268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.755874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.755906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11776 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.755917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.759627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.759658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.759670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.762926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.762956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.762967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.766592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.766623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.766634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.771056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.771088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.771103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.774178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.774208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.774219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.710 [2024-07-13 07:12:48.777977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.710 [2024-07-13 07:12:48.778007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.710 [2024-07-13 07:12:48.778018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.782503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.782549] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.782561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.786830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.786862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.786873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.790358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.790388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.790400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.793448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.793478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.793489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.796987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.797017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.797028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.800416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.800447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.800458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.804324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.804355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.804366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.807818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.807848] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.807860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.811427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.811459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.811470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.815897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.815928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.815939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.819059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.819089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.819100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.822931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.822963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.822974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.826996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.827027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.827038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.829876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.829907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.829918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.833669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.833701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.833712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.837321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.837352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.837370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.840317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.840347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.840358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.844472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.844503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.844514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.849060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.849091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.849104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.853001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.853030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.853041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.857137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.857168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.857179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.972 [2024-07-13 07:12:48.859580] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.972 [2024-07-13 07:12:48.859617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.972 [2024-07-13 07:12:48.859629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.864225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.864257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.864268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.868261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.868292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.868304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.871338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.871369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.871380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.874985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.875016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.875028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.877933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.877963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.877973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.881214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.881245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.881264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:26:40.973 [2024-07-13 07:12:48.884956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.884986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.884997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.888573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.888602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.888614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.892486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.892516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.892528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.895333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.895364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.895375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.899465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.899497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.899508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.903684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.903715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.903726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.907156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.907186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.907197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.910106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.910135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.910146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.914112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.914142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.914153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.917493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.917524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.917535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.920118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.920150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.920161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.923945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.923976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.923987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.928298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.928330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.928342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.932123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.932154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.932165] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.934927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.934963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.934974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.939063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.939094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.973 [2024-07-13 07:12:48.939105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.973 [2024-07-13 07:12:48.943589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.973 [2024-07-13 07:12:48.943633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.943644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.946536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.946581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.946593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.950066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.950098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.950109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.953601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.953631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.953643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.956859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.956890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.956901] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.960423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.960455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.960466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.963841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.963872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.963888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.966945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.966976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.966987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.970660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.970691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.970702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.973404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.973434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.973445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.977201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.977232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.977243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.980179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.980210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:40.974 [2024-07-13 07:12:48.980221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.983681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.983711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.983722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.986758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.986790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.986801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.990293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.990324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.990335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.993814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.993845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.993857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:48.997895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:48.997944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:48.997956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:49.000620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:49.000649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:49.000660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:49.004652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:49.004682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:49.004694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:49.009080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:49.009112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:49.009123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:49.012846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:49.012876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:49.012888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:49.015385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:49.015415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:49.015427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:49.019457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:49.019488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.974 [2024-07-13 07:12:49.019500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.974 [2024-07-13 07:12:49.022459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.974 [2024-07-13 07:12:49.022504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.975 [2024-07-13 07:12:49.022516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.975 [2024-07-13 07:12:49.026451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.975 [2024-07-13 07:12:49.026501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.975 [2024-07-13 07:12:49.026521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.975 [2024-07-13 07:12:49.030422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.975 [2024-07-13 07:12:49.030454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.975 [2024-07-13 07:12:49.030465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.975 [2024-07-13 07:12:49.033859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.975 [2024-07-13 07:12:49.033889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.975 [2024-07-13 07:12:49.033899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.975 [2024-07-13 07:12:49.036839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.975 [2024-07-13 07:12:49.036870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.975 [2024-07-13 07:12:49.036881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.975 [2024-07-13 07:12:49.040624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.975 [2024-07-13 07:12:49.040660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.975 [2024-07-13 07:12:49.040681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.975 [2024-07-13 07:12:49.043676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:40.975 [2024-07-13 07:12:49.043706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.975 [2024-07-13 07:12:49.043717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.047503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.047535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.047546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.050146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.050176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.050187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.054183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 
[2024-07-13 07:12:49.054214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.054225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.058210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.058241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.058255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.061448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.061478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.061489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.065562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.065589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.065600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.069103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.069133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.069145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.072058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.072089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.072100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.075915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.075946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.075957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.079463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.079495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.079506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.081964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.081994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.082011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.085413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.085443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.085458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.089069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.089098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.089109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.091993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.092023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.092034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.094841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.237 [2024-07-13 07:12:49.094872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.237 [2024-07-13 07:12:49.094886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.237 [2024-07-13 07:12:49.098162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.098192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.098204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.101517] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.101547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.101572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.105182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.105214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.105225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.108607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.108638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.108648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.112340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.112371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.112382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.116408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.116438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.116449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.119233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.119264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.119276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.123193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.123224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.123236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:26:41.238 [2024-07-13 07:12:49.126864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.126895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.126906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.129873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.129902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.129913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.133354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.133384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.133396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.136970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.137001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.137013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.139995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.140026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.140037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.143704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.143735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.143746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.147032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.147063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.147075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.150868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.150899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.150910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.154581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.154611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.154621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.158181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.158212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.158223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.161950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.161979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.161990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.165263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.165293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.165305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.168843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.168874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.168885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.238 [2024-07-13 07:12:49.172804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.238 [2024-07-13 07:12:49.172835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.238 [2024-07-13 07:12:49.172845] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.176004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.176034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.176045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.178952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.178983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.178994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.182197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.182226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.182237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.186287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.186317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.186329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.190012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.190043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.190054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.193277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.193309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.193320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.197070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.197100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.197111] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.200340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.200371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.200382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.203908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.203938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.203950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.207659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.207690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.207702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.210895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.210925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.210937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.214689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.214720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.214731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.217900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.217931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.217942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.221263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.221293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:41.239 [2024-07-13 07:12:49.221304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.225027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.225058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.225069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.229132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.229163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.229174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.232144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.232174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.232185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.235527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.235569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.235583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.239292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.239323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.239334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.242691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.242722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.242733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.246796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.246826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.246837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.249847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.239 [2024-07-13 07:12:49.249876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.239 [2024-07-13 07:12:49.249887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.239 [2024-07-13 07:12:49.253189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.253221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.253232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.256863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.256894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.256905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.260241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.260272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.260283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.264023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.264054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.264065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.267697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.267728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.267740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.271375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.271407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.271418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.274708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.274739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.274750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.278615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.278647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.278658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.281525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.281567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.281580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.285013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.285044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.285055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.289008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.289039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.289051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.293282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.293313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.293324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.295676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 
[2024-07-13 07:12:49.295705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.295716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.299802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.299834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.299845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.302987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.303018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.303030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.240 [2024-07-13 07:12:49.306530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.240 [2024-07-13 07:12:49.306574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.240 [2024-07-13 07:12:49.306587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.502 [2024-07-13 07:12:49.310271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.502 [2024-07-13 07:12:49.310314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.502 [2024-07-13 07:12:49.310325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.502 [2024-07-13 07:12:49.314003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.502 [2024-07-13 07:12:49.314034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.502 [2024-07-13 07:12:49.314046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.502 [2024-07-13 07:12:49.317229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.502 [2024-07-13 07:12:49.317260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.502 [2024-07-13 07:12:49.317271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.502 [2024-07-13 07:12:49.320891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x17f62f0) 00:26:41.502 [2024-07-13 07:12:49.320922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.502 [2024-07-13 07:12:49.320934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.502 [2024-07-13 07:12:49.324134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.502 [2024-07-13 07:12:49.324166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.502 [2024-07-13 07:12:49.324177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.502 [2024-07-13 07:12:49.327591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.502 [2024-07-13 07:12:49.327621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.502 [2024-07-13 07:12:49.327633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.502 [2024-07-13 07:12:49.331064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.502 [2024-07-13 07:12:49.331095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.502 [2024-07-13 07:12:49.331107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.502 [2024-07-13 07:12:49.335140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.502 [2024-07-13 07:12:49.335172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.502 [2024-07-13 07:12:49.335183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.337825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.337856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.337867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.341608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.341639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.341650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.345490] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.345522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.345533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.348797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.348828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.348839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.352525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.352570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.352583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.356562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.356591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.356603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.359628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.359658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.359669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.363162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.363194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.363206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.366892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.366924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.366935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:26:41.503 [2024-07-13 07:12:49.369857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.369887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.369898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.373931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.373963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.373975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.378075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.378106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.378118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.381637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.381666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.381677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.384088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.384120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.384131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.387937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.387968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.387980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.392134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.392165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.392177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.396223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.396255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.396266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.400316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.400347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.400359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.402655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.402684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.402695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.406691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.406723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.406734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.409520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.409563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.409576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.413080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.413112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.503 [2024-07-13 07:12:49.413124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.503 [2024-07-13 07:12:49.416490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.503 [2024-07-13 07:12:49.416523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.416535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.420001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.420033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.420044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.423620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.423651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.423663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.427586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.427617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.427628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.430519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.430568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.430581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.434416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.434449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.434460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.438546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.438586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.438598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.441176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.441206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 
[2024-07-13 07:12:49.441217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.444870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.444901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.444912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.448252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.448283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.448295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.451495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.451525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.451537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.455460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.455493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.455505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.458508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.458545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.458571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.462372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.462403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.462414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.466680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.466711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.466722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.470544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.470584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.470596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.474843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.474890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.474901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.477274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.477304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.477315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.481818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.481849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.481861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.485254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.485286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.485297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.488348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.488380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.488391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.491966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.491998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.492009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.504 [2024-07-13 07:12:49.495374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.504 [2024-07-13 07:12:49.495406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.504 [2024-07-13 07:12:49.495417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.498789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.498836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.498847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.502461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.502501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.502513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.505447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.505479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.505490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.509335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.509366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.509378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.513140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.513171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.513185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.516191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.516221] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.516232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.519506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.519538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.519563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.523507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.523539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.523566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.526238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.526280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.526291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.530062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.530094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.530108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.533219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.533262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.533273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.536410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.536441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.536453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.540204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.540236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.540249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.543386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.543417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.543428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.546605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.546636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.546647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.550700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.550732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.550744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.555246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.555278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.555289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.559525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.559568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.559581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.563214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.563245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.563262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.566270] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.566301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.566312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.570057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.570089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.570101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.505 [2024-07-13 07:12:49.574207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.505 [2024-07-13 07:12:49.574238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.505 [2024-07-13 07:12:49.574249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.577888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.577918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.577929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.580973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.581003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.581015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.584508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.584541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.584563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.588030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.588062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.588073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:26:41.767 [2024-07-13 07:12:49.591312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.591343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.591355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.595402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.595433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.595445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.598320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.598350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.598365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.602581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.602612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.602624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.605846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.605877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.605888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.609337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.609369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.609380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.612465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.612496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.612507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.616344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.616375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.616386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.619554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.619597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.619609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.622765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.622798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.622809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.767 [2024-07-13 07:12:49.627254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.767 [2024-07-13 07:12:49.627286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.767 [2024-07-13 07:12:49.627298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.630127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.630158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.630169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.633856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.633887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.633898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.637962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.637995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.638006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.641016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.641046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.641057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.644541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.644580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.644592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.648523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.648565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.648578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.652428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.652459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.652470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.655133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.655164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.655176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.658974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.659005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.659017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.662474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.662514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:41.768 [2024-07-13 07:12:49.662532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.665582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.665611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.665623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.668780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.668811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.668823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.672280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.672311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.672323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.675381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.675412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.675424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.679187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.679219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.679231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.682981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.683012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.683024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.686670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.686701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.686712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.689656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.689686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.689697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.693535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.693575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.693587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.696951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.696981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.696993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.700714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.700745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.700756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.704470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.704501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.704513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.768 [2024-07-13 07:12:49.707777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.768 [2024-07-13 07:12:49.707808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.768 [2024-07-13 07:12:49.707819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.710984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.711015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.711027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.714626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.714658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.714669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.718223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.718254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.718265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.721663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.721693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.721704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.725777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.725808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.725819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.729751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.729781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.729793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.732578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.732608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.732619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.736207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.736239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.736251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.740066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.740097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.740108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.743430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.743462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.743473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.747292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.747324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.747336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.751723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.751754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.751766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.754896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.754927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.754938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.758774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.758807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.758818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.762258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 
[2024-07-13 07:12:49.762291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.762302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.765525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.765566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.765579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.769203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.769236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.769247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.773208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.773239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.773250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.776416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.776448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.776458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.779270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.779301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.779312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.783246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.783278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.783289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.787648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.787680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.769 [2024-07-13 07:12:49.787691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.769 [2024-07-13 07:12:49.791127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.769 [2024-07-13 07:12:49.791158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.791169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.794277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.794307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.794318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.797243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.797274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.797286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.800789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.800820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.800831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.804782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.804814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.804825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.808669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.808701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.808712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.811759] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.811790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.811801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.815206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.815238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.815249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.818953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.818985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.818997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.822190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.822222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.822234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.825763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.825794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.825805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.830008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.830041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.830052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.833124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.833156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.833168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:26:41.770 [2024-07-13 07:12:49.836540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:41.770 [2024-07-13 07:12:49.836582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.770 [2024-07-13 07:12:49.836593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.031 [2024-07-13 07:12:49.840448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.031 [2024-07-13 07:12:49.840481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.031 [2024-07-13 07:12:49.840492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.031 [2024-07-13 07:12:49.844088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.031 [2024-07-13 07:12:49.844120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.031 [2024-07-13 07:12:49.844131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.847790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.847833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.847845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.851575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.851616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.851631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.855508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.855557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.855585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.858996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.859027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.859039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.862851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.862903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.862915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.866199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.866230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.866241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.869498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.869529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.869540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.873394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.873425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.873436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.877437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.877468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.877479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.880429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.880460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.880471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.883848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.883880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.883891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.887673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.887704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.887715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.891287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.891318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.891332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.894954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.894985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.894996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.897687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.897717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.897729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.901155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.901186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.901197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.905060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.905090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.905101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.907989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.908020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:42.032 [2024-07-13 07:12:49.908031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.911685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.911717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.911729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.915463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.915495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.915506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.919041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.919073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.919084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.922328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.922359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.922370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.925634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.925665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.925677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.929413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.929443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.929454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.933234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.032 [2024-07-13 07:12:49.933264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.032 [2024-07-13 07:12:49.933275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.032 [2024-07-13 07:12:49.935977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.936008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.936019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.939672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.939703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.939715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.943585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.943616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.943627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.946277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.946307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.946319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.949858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.949888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.949899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.953748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.953780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.953791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.956402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.956433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.956444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.960262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.960293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.960304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.963041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.963072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.963083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.966799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.966830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.966841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.970317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.970348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.970359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.973937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.973968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.973979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.977607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.977637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.977649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.981808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 
07:12:49.981840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.981851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.984221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.984264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.984275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.988803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.988833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.988845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.992346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.992378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.992389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.995488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.995519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.995530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:49.999266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:49.999298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:49.999310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.003123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.003153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.003165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.006160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.006193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.006206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.010386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.010429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.010442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.014517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.014568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.014582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.018467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.018526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.018544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.021829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.021862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.021874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.026106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.026139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.026152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.030088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.030122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.030135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.034108] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.034142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.034155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.038375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.033 [2024-07-13 07:12:50.038410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.033 [2024-07-13 07:12:50.038421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.033 [2024-07-13 07:12:50.041937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.041969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.041981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.045904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.045937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.045948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.050228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.050261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.050273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.053423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.053455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.053467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.057407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.057439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.057451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:42.034 [2024-07-13 07:12:50.062084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.062116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.062127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.066715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.066750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.066762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.069245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.069285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.069297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.073704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.073736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.073747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.077313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.077345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.077356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.080258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.080290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.080301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.084237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.084269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.084280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.088328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.088361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.088372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.090696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.090726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.090737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.095234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.095266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.095278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.099194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.099225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.099236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.034 [2024-07-13 07:12:50.101764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.034 [2024-07-13 07:12:50.101795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.034 [2024-07-13 07:12:50.101806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.293 [2024-07-13 07:12:50.105747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17f62f0) 00:26:42.293 [2024-07-13 07:12:50.105779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:42.293 [2024-07-13 07:12:50.105790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.293 00:26:42.293 Latency(us) 00:26:42.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.293 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:42.293 nvme0n1 : 2.00 8636.12 1079.52 0.00 0.00 1849.12 569.72 9175.04 00:26:42.293 =================================================================================================================== 
00:26:42.293 Total : 8636.12 1079.52 0.00 0.00 1849.12 569.72 9175.04
00:26:42.293 0
00:26:42.293 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:42.293 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:42.294 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:42.294 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:42.294 | .driver_specific
00:26:42.294 | .nvme_error
00:26:42.294 | .status_code
00:26:42.294 | .command_transient_transport_error'
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 557 > 0 ))
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112507
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112507 ']'
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112507
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112507
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:42.553 killing process with pid 112507
07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112507'
Received shutdown signal, test time was about 2.000000 seconds
00:26:42.553
00:26:42.553 Latency(us)
00:26:42.553 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:42.553 ===================================================================================================================
00:26:42.553 Total                                  :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112507
00:26:42.553 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112507
00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112592
00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112592 /var/tmp/bperf.sock
00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:42.812 07:12:50
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112592 ']' 00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.812 07:12:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.812 [2024-07-13 07:12:50.786449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:42.812 [2024-07-13 07:12:50.786591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112592 ] 00:26:43.071 [2024-07-13 07:12:50.928798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.071 [2024-07-13 07:12:51.047561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.008 07:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:44.008 07:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:44.008 07:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:44.008 07:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:44.008 07:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:44.008 07:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.008 07:12:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.008 07:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.008 07:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.008 07:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.282 nvme0n1 00:26:44.282 07:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:44.282 07:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.282 07:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.282 07:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.282 
07:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:44.282 07:12:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.562 Running I/O for 2 seconds... 00:26:44.562 [2024-07-13 07:12:52.452115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ee5c8 00:26:44.562 [2024-07-13 07:12:52.452996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.562 [2024-07-13 07:12:52.453063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:44.562 [2024-07-13 07:12:52.462257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fac10 00:26:44.562 [2024-07-13 07:12:52.463268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.562 [2024-07-13 07:12:52.463319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.473933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eff18 00:26:44.563 [2024-07-13 07:12:52.475099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.475149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.485085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fb480 00:26:44.563 [2024-07-13 07:12:52.486410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.486455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.495118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e8d30 00:26:44.563 [2024-07-13 07:12:52.496147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.496190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.505436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e2c28 00:26:44.563 [2024-07-13 07:12:52.506429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.506458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.516480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e7818 00:26:44.563 
[2024-07-13 07:12:52.517572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.517614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.528253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e49b0 00:26:44.563 [2024-07-13 07:12:52.529808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.529837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.537292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e5ec8 00:26:44.563 [2024-07-13 07:12:52.538373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.538414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.548994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e5ec8 00:26:44.563 [2024-07-13 07:12:52.550062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.550103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.559004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f7da8 00:26:44.563 [2024-07-13 07:12:52.560021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.560073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.569192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fda78 00:26:44.563 [2024-07-13 07:12:52.570114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.570155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.580262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fa3a0 00:26:44.563 [2024-07-13 07:12:52.581352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.581381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.591431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with 
pdu=0x2000190eb760 00:26:44.563 [2024-07-13 07:12:52.592916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.592962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.600673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fb480 00:26:44.563 [2024-07-13 07:12:52.601496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.601534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.613111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fb480 00:26:44.563 [2024-07-13 07:12:52.613903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.613953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.624835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fb480 00:26:44.563 [2024-07-13 07:12:52.625635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.563 [2024-07-13 07:12:52.625676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.563 [2024-07-13 07:12:52.636981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fb480 00:26:44.822 [2024-07-13 07:12:52.637817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.822 [2024-07-13 07:12:52.637858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.822 [2024-07-13 07:12:52.647510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f2d80 00:26:44.822 [2024-07-13 07:12:52.648249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.822 [2024-07-13 07:12:52.648282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:44.822 [2024-07-13 07:12:52.658317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ee190 00:26:44.823 [2024-07-13 07:12:52.659182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.659216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.669540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f85200) with pdu=0x2000190f7da8 00:26:44.823 [2024-07-13 07:12:52.670447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.670522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.680853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fef90 00:26:44.823 [2024-07-13 07:12:52.681908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.681961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.691852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f6cc8 00:26:44.823 [2024-07-13 07:12:52.692895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.692924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.702787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f1ca0 00:26:44.823 [2024-07-13 07:12:52.703831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.703859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.713254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e6738 00:26:44.823 [2024-07-13 07:12:52.714371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.714411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.724976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ef270 00:26:44.823 [2024-07-13 07:12:52.725823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.725865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.736103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f35f0 00:26:44.823 [2024-07-13 07:12:52.737116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.737160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.746768] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fe2e8 00:26:44.823 [2024-07-13 07:12:52.747700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.747741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.759340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3498 00:26:44.823 [2024-07-13 07:12:52.761229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.761256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.767120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e2c28 00:26:44.823 [2024-07-13 07:12:52.768139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.768178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.778668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e7c50 00:26:44.823 [2024-07-13 07:12:52.779864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.779905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.789768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f57b0 00:26:44.823 [2024-07-13 07:12:52.791150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.791192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.801192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ea248 00:26:44.823 [2024-07-13 07:12:52.802539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.802592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.812371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f3e60 00:26:44.823 [2024-07-13 07:12:52.813840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.813868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.823650] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f20d8 00:26:44.823 [2024-07-13 07:12:52.825333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.825372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.832207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f46d0 00:26:44.823 [2024-07-13 07:12:52.833445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.833485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.846199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ee5c8 00:26:44.823 [2024-07-13 07:12:52.848140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.848169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.855908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f2948 00:26:44.823 [2024-07-13 07:12:52.857053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.857094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.867076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3498 00:26:44.823 [2024-07-13 07:12:52.868074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.868115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.877341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190de8a8 00:26:44.823 [2024-07-13 07:12:52.879083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.879125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.823 [2024-07-13 07:12:52.890045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f6cc8 00:26:44.823 [2024-07-13 07:12:52.891828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.823 [2024-07-13 07:12:52.891858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.081 
[2024-07-13 07:12:52.899949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fd640 00:26:45.081 [2024-07-13 07:12:52.901361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.081 [2024-07-13 07:12:52.901392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:45.081 [2024-07-13 07:12:52.912907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e23b8 00:26:45.081 [2024-07-13 07:12:52.914870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.081 [2024-07-13 07:12:52.914910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.081 [2024-07-13 07:12:52.923115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e5ec8 00:26:45.081 [2024-07-13 07:12:52.924456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.081 [2024-07-13 07:12:52.924498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.081 [2024-07-13 07:12:52.934417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fb480 00:26:45.081 [2024-07-13 07:12:52.935865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.081 [2024-07-13 07:12:52.935918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:45.081 [2024-07-13 07:12:52.946966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fd640 00:26:45.081 [2024-07-13 07:12:52.948255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.081 [2024-07-13 07:12:52.948300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.081 [2024-07-13 07:12:52.958674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e5ec8 00:26:45.081 [2024-07-13 07:12:52.960314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.081 [2024-07-13 07:12:52.960365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.081 [2024-07-13 07:12:52.967413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190de8a8 00:26:45.081 [2024-07-13 07:12:52.968359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:52.968401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 
dnr:0 00:26:45.082 [2024-07-13 07:12:52.980073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fdeb0 00:26:45.082 [2024-07-13 07:12:52.981625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:52.981668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:52.990914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e23b8 00:26:45.082 [2024-07-13 07:12:52.992567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:52.992609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.001502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ee190 00:26:45.082 [2024-07-13 07:12:53.003212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.003252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.010061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e01f8 00:26:45.082 [2024-07-13 07:12:53.011009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.011050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.020639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e8d30 00:26:45.082 [2024-07-13 07:12:53.021936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.021978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.031181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190df988 00:26:45.082 [2024-07-13 07:12:53.032435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.032476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.040389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ebfd0 00:26:45.082 [2024-07-13 07:12:53.041308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.041348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.051282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190df988 00:26:45.082 [2024-07-13 07:12:53.052419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.052460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.061862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ee5c8 00:26:45.082 [2024-07-13 07:12:53.063065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.063114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.071804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f6458 00:26:45.082 [2024-07-13 07:12:53.072830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.072875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.083974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e5ec8 00:26:45.082 [2024-07-13 07:12:53.085403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.085449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.093899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f20d8 00:26:45.082 [2024-07-13 07:12:53.095350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.095395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.102941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eb760 00:26:45.082 [2024-07-13 07:12:53.103492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.103522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.115148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e6738 00:26:45.082 [2024-07-13 07:12:53.116548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.116597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.126613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eea00 00:26:45.082 [2024-07-13 07:12:53.128291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.128331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.137946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f81e0 00:26:45.082 [2024-07-13 07:12:53.139658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.139701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.082 [2024-07-13 07:12:53.149563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eaab8 00:26:45.082 [2024-07-13 07:12:53.151342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.082 [2024-07-13 07:12:53.151387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.162166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f4b08 00:26:45.340 [2024-07-13 07:12:53.163882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.340 [2024-07-13 07:12:53.163916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.172601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ddc00 00:26:45.340 [2024-07-13 07:12:53.173731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.340 [2024-07-13 07:12:53.173763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.183800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f4298 00:26:45.340 [2024-07-13 07:12:53.184845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.340 [2024-07-13 07:12:53.184872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.196309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e1b48 00:26:45.340 [2024-07-13 07:12:53.197482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.340 [2024-07-13 07:12:53.197524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.206536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e73e0 00:26:45.340 [2024-07-13 07:12:53.207716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.340 [2024-07-13 07:12:53.207757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.218401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e73e0 00:26:45.340 [2024-07-13 07:12:53.219696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.340 [2024-07-13 07:12:53.219727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.230996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e73e0 00:26:45.340 [2024-07-13 07:12:53.232655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.340 [2024-07-13 07:12:53.232698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.238707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f4298 00:26:45.340 [2024-07-13 07:12:53.239584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.340 [2024-07-13 07:12:53.239611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.251825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f81e0 00:26:45.340 [2024-07-13 07:12:53.253101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.340 [2024-07-13 07:12:53.253131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.340 [2024-07-13 07:12:53.263965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eaab8 00:26:45.340 [2024-07-13 07:12:53.265981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.266026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.273514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f8618 00:26:45.341 [2024-07-13 07:12:53.274840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.274888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.284644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f8618 00:26:45.341 [2024-07-13 07:12:53.285889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.285930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.294669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f6890 00:26:45.341 [2024-07-13 07:12:53.296002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.296036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.307443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f6890 00:26:45.341 [2024-07-13 07:12:53.309324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.309351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.315626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f57b0 00:26:45.341 [2024-07-13 07:12:53.316626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.316667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.327490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ebb98 00:26:45.341 [2024-07-13 07:12:53.328237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.328266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.339141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f1868 00:26:45.341 [2024-07-13 07:12:53.340629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.340659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.349150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190de038 00:26:45.341 [2024-07-13 07:12:53.350531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 
07:12:53.350586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.357554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e23b8 00:26:45.341 [2024-07-13 07:12:53.358328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.358369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.370211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f5be8 00:26:45.341 [2024-07-13 07:12:53.371655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.371690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.379364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f5378 00:26:45.341 [2024-07-13 07:12:53.380133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.380161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.389931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3d08 00:26:45.341 [2024-07-13 07:12:53.391095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.391137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.402177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f0788 00:26:45.341 [2024-07-13 07:12:53.403895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.341 [2024-07-13 07:12:53.403923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:45.341 [2024-07-13 07:12:53.413567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f31b8 00:26:45.598 [2024-07-13 07:12:53.415594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.598 [2024-07-13 07:12:53.415635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.598 [2024-07-13 07:12:53.423333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f57b0 00:26:45.599 [2024-07-13 07:12:53.424524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:45.599 [2024-07-13 07:12:53.424578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.435172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f57b0 00:26:45.599 [2024-07-13 07:12:53.436963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.437015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.442868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e4140 00:26:45.599 [2024-07-13 07:12:53.443631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.443661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.453550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e6fa8 00:26:45.599 [2024-07-13 07:12:53.454601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.454632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.465301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e38d0 00:26:45.599 [2024-07-13 07:12:53.466518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.466571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.476325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e95a0 00:26:45.599 [2024-07-13 07:12:53.477774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.477801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.484659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190feb58 00:26:45.599 [2024-07-13 07:12:53.485426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.485475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.495843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ef270 00:26:45.599 [2024-07-13 07:12:53.496957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12229 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.497021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.507355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ee5c8 00:26:45.599 [2024-07-13 07:12:53.508605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.508645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.519191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f2948 00:26:45.599 [2024-07-13 07:12:53.520963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.521020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.526679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fd208 00:26:45.599 [2024-07-13 07:12:53.527476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.527504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.538152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fc560 00:26:45.599 [2024-07-13 07:12:53.539456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.539486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.548409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e1b48 00:26:45.599 [2024-07-13 07:12:53.549649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.549676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.559353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f31b8 00:26:45.599 [2024-07-13 07:12:53.560528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.560578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.569788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f7da8 00:26:45.599 [2024-07-13 07:12:53.570743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24000 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.570800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.579816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f2510 00:26:45.599 [2024-07-13 07:12:53.580617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.580676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.589609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eea00 00:26:45.599 [2024-07-13 07:12:53.590215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.590247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.600074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fd640 00:26:45.599 [2024-07-13 07:12:53.601026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.601072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.609905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e1710 00:26:45.599 [2024-07-13 07:12:53.610900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.610946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.621279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e7c50 00:26:45.599 [2024-07-13 07:12:53.622351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.622395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.632164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ddc00 00:26:45.599 [2024-07-13 07:12:53.633397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.633441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.642177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f4f40 00:26:45.599 [2024-07-13 07:12:53.643343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:6 nsid:1 lba:12268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.643387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.652253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f1868 00:26:45.599 [2024-07-13 07:12:53.653253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.599 [2024-07-13 07:12:53.653296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.599 [2024-07-13 07:12:53.664501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f2510 00:26:45.599 [2024-07-13 07:12:53.666149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.600 [2024-07-13 07:12:53.666195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.676227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fe2e8 00:26:45.858 [2024-07-13 07:12:53.678315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.678358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.684081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e5658 00:26:45.858 [2024-07-13 07:12:53.685069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.685110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.695112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e1b48 00:26:45.858 [2024-07-13 07:12:53.696054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.696097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.707485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e27f0 00:26:45.858 [2024-07-13 07:12:53.709159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.709189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.716770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f6020 00:26:45.858 [2024-07-13 07:12:53.717739] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.717769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.726803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3060 00:26:45.858 [2024-07-13 07:12:53.728470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.728511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.738218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fbcf0 00:26:45.858 [2024-07-13 07:12:53.739765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.739796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.747940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f3e60 00:26:45.858 [2024-07-13 07:12:53.749198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.749239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.756508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fdeb0 00:26:45.858 [2024-07-13 07:12:53.757364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.757403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.767553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fdeb0 00:26:45.858 [2024-07-13 07:12:53.768344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.768385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.780289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f1430 00:26:45.858 [2024-07-13 07:12:53.781995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.782043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.787895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ea680 00:26:45.858 [2024-07-13 07:12:53.788806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.788852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.798623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e7c50 00:26:45.858 [2024-07-13 07:12:53.799475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.799521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.811305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e01f8 00:26:45.858 [2024-07-13 07:12:53.812704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.812735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.820642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e2c28 00:26:45.858 [2024-07-13 07:12:53.822244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.822286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.832888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ee5c8 00:26:45.858 [2024-07-13 07:12:53.834570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.834614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.843250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fc998 00:26:45.858 [2024-07-13 07:12:53.844940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.844968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.852535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e1f80 00:26:45.858 [2024-07-13 07:12:53.853501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.853529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.863125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f1ca0 00:26:45.858 [2024-07-13 
07:12:53.864356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.864399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.872952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f92c0 00:26:45.858 [2024-07-13 07:12:53.874200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.874241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.858 [2024-07-13 07:12:53.885435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f2510 00:26:45.858 [2024-07-13 07:12:53.887375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.858 [2024-07-13 07:12:53.887419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.859 [2024-07-13 07:12:53.893010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ebb98 00:26:45.859 [2024-07-13 07:12:53.894045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.859 [2024-07-13 07:12:53.894073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:45.859 [2024-07-13 07:12:53.903723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ed4e8 00:26:45.859 [2024-07-13 07:12:53.904779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.859 [2024-07-13 07:12:53.904819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:45.859 [2024-07-13 07:12:53.917031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f8618 00:26:45.859 [2024-07-13 07:12:53.918540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.859 [2024-07-13 07:12:53.918601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.859 [2024-07-13 07:12:53.926539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f7da8 00:26:45.859 [2024-07-13 07:12:53.927450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.859 [2024-07-13 07:12:53.927503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:46.117 [2024-07-13 07:12:53.940870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f7da8 
00:26:46.117 [2024-07-13 07:12:53.942391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.117 [2024-07-13 07:12:53.942452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:46.117 [2024-07-13 07:12:53.952653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eee38 00:26:46.117 [2024-07-13 07:12:53.953943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.117 [2024-07-13 07:12:53.953993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:46.117 [2024-07-13 07:12:53.964376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eee38 00:26:46.117 [2024-07-13 07:12:53.965609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.117 [2024-07-13 07:12:53.965634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:46.117 [2024-07-13 07:12:53.977392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eee38 00:26:46.117 [2024-07-13 07:12:53.979254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.117 [2024-07-13 07:12:53.979283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:46.117 [2024-07-13 07:12:53.986609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f1430 00:26:46.117 [2024-07-13 07:12:53.987523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.117 [2024-07-13 07:12:53.987579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:46.117 [2024-07-13 07:12:53.998051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f0bc0 00:26:46.117 [2024-07-13 07:12:53.999136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.117 [2024-07-13 07:12:53.999186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.117 [2024-07-13 07:12:54.008433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190dece0 00:26:46.117 [2024-07-13 07:12:54.009624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.117 [2024-07-13 07:12:54.009653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:46.117 [2024-07-13 07:12:54.018771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) 
with pdu=0x2000190f5378 00:26:46.117 [2024-07-13 07:12:54.019942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.117 [2024-07-13 07:12:54.019983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:46.117 [2024-07-13 07:12:54.028991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ecc78 00:26:46.118 [2024-07-13 07:12:54.029983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.030024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.039427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fc560 00:26:46.118 [2024-07-13 07:12:54.040167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.040197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.050242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190eaef0 00:26:46.118 [2024-07-13 07:12:54.051367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.051409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.062292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f3a28 00:26:46.118 [2024-07-13 07:12:54.063827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.063856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.071168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e6300 00:26:46.118 [2024-07-13 07:12:54.072160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.072188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.083835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f3a28 00:26:46.118 [2024-07-13 07:12:54.085422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.085464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.093233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f85200) with pdu=0x2000190ec840 00:26:46.118 [2024-07-13 07:12:54.094240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.094269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.103966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ec840 00:26:46.118 [2024-07-13 07:12:54.104976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.105017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.114957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ec840 00:26:46.118 [2024-07-13 07:12:54.115957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.115998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.125630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ec840 00:26:46.118 [2024-07-13 07:12:54.126686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.126718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.138708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f3a28 00:26:46.118 [2024-07-13 07:12:54.140613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.140664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.147968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f1868 00:26:46.118 [2024-07-13 07:12:54.149008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.149049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.157575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3060 00:26:46.118 [2024-07-13 07:12:54.158807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.158834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.168949] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3060 00:26:46.118 [2024-07-13 07:12:54.170088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.170128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:46.118 [2024-07-13 07:12:54.180445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3060 00:26:46.118 [2024-07-13 07:12:54.181770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.118 [2024-07-13 07:12:54.181798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.192929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3060 00:26:46.378 [2024-07-13 07:12:54.194123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.378 [2024-07-13 07:12:54.194164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.205064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3060 00:26:46.378 [2024-07-13 07:12:54.206307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.378 [2024-07-13 07:12:54.206335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.218362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e3060 00:26:46.378 [2024-07-13 07:12:54.220229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.378 [2024-07-13 07:12:54.220257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.227294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190feb58 00:26:46.378 [2024-07-13 07:12:54.228415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.378 [2024-07-13 07:12:54.228456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.238605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190feb58 00:26:46.378 [2024-07-13 07:12:54.239796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.378 [2024-07-13 07:12:54.239825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.249420] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190feb58 00:26:46.378 [2024-07-13 07:12:54.250706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.378 [2024-07-13 07:12:54.250734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.259540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190fbcf0 00:26:46.378 [2024-07-13 07:12:54.260616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.378 [2024-07-13 07:12:54.260657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.269638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f8e88 00:26:46.378 [2024-07-13 07:12:54.270839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.378 [2024-07-13 07:12:54.270868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.280518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f8a50 00:26:46.378 [2024-07-13 07:12:54.281833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.378 [2024-07-13 07:12:54.281875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:46.378 [2024-07-13 07:12:54.291194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e8d30 00:26:46.378 [2024-07-13 07:12:54.292445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.292472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.301155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ea680 00:26:46.379 [2024-07-13 07:12:54.302306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.302334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.312536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f20d8 00:26:46.379 [2024-07-13 07:12:54.313807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.313848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:46.379 
[2024-07-13 07:12:54.324051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e7c50 00:26:46.379 [2024-07-13 07:12:54.325409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.325450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.335085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e01f8 00:26:46.379 [2024-07-13 07:12:54.336442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.336472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.347730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e01f8 00:26:46.379 [2024-07-13 07:12:54.349097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.349139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.358048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f0350 00:26:46.379 [2024-07-13 07:12:54.359388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.359429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.369242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e4578 00:26:46.379 [2024-07-13 07:12:54.370380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.370420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.379882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e8088 00:26:46.379 [2024-07-13 07:12:54.381033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.381073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.390214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e4de8 00:26:46.379 [2024-07-13 07:12:54.391311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.391353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004d 
p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.403297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190e27f0 00:26:46.379 [2024-07-13 07:12:54.405079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.405123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.414579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190ea248 00:26:46.379 [2024-07-13 07:12:54.416409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.416453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.422469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f2510 00:26:46.379 [2024-07-13 07:12:54.423559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.423586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:46.379 [2024-07-13 07:12:54.433851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f85200) with pdu=0x2000190f2510 00:26:46.379 [2024-07-13 07:12:54.434921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:46.379 [2024-07-13 07:12:54.434970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:46.379 00:26:46.379 Latency(us) 00:26:46.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.379 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:46.379 nvme0n1 : 2.00 23459.35 91.64 0.00 0.00 5448.22 2115.03 15490.33 00:26:46.379 =================================================================================================================== 00:26:46.379 Total : 23459.35 91.64 0.00 0.00 5448.22 2115.03 15490.33 00:26:46.379 0 00:26:46.639 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:46.639 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:46.639 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:46.639 | .driver_specific 00:26:46.639 | .nvme_error 00:26:46.639 | .status_code 00:26:46.639 | .command_transient_transport_error' 00:26:46.639 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 184 > 0 )) 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112592 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112592 ']' 
00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112592 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112592 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:46.898 killing process with pid 112592 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112592' 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112592 00:26:46.898 Received shutdown signal, test time was about 2.000000 seconds 00:26:46.898 00:26:46.898 Latency(us) 00:26:46.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.898 =================================================================================================================== 00:26:46.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.898 07:12:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112592 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112684 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112684 /var/tmp/bperf.sock 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 112684 ']' 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.157 07:12:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.157 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:47.157 Zero copy mechanism will not be used. 
00:26:47.157 [2024-07-13 07:12:55.105449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:47.157 [2024-07-13 07:12:55.105583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112684 ] 00:26:47.416 [2024-07-13 07:12:55.237801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.416 [2024-07-13 07:12:55.357564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.983 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.983 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:47.983 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:47.983 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:48.241 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:48.241 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.241 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.241 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.241 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.241 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.500 nvme0n1 00:26:48.760 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:48.760 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.760 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.760 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.760 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:48.760 07:12:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:48.760 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:48.760 Zero copy mechanism will not be used. 00:26:48.760 Running I/O for 2 seconds... 
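[editor's note] For readers following the trace, the error-injection run that starts here boils down to the RPC sequence sketched below. This is a minimal sketch assembled only from the commands visible in the xtrace above, not a verbatim copy of host/digest.sh: it assumes the nvmf target answers rpc.py on its default socket for the accel_error_inject_error calls (the rpc_cmd wrapper in the trace does not show the address) and that the bdevperf instance launched above is already listening on /var/tmp/bperf.sock. The paths, flags, and jq filter are taken from the trace itself; the first run above used the same query and treated its count of 184 transient transport errors as a pass.

#!/usr/bin/env bash
# Hedged sketch of the digest-error flow traced in this log. Assumptions:
# the target accepts rpc.py on its default socket, and bdevperf from the
# trace is already up on /var/tmp/bperf.sock.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
bperf_sock=/var/tmp/bperf.sock

# Retry indefinitely in the initiator so corrupted completions show up as
# error counters instead of failed I/O.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled while CRC32C corruption is disabled on the target.
"$rpc" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 CRC32C operations on the target, then drive I/O from bdevperf.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
"$bperf_py" -s "$bperf_sock" perform_tests

# Count completions that came back as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
"$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

[end editor's note]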
00:26:48.760 [2024-07-13 07:12:56.710641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.710939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.710991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.716348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.716644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.716666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.721815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.722082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.722110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.727284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.727574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.727603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.732725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.733005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.733032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.738229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.738518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.738565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.743830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.744111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.744140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.749756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.750038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.750065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.755654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.755979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.756007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.761455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.761756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.761792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.767210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.767495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.767522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.773099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.773367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.773402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.778857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.779153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.779179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.784786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.785068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.785095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.790750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.791066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.791094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.796446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.796756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.796797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.802084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.802352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.802381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.807767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.808049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.808076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.813087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.813368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.813395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.819069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.819348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.819375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.824717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.824997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.825023] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.760 [2024-07-13 07:12:56.830261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:48.760 [2024-07-13 07:12:56.830610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-07-13 07:12:56.830640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.836344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.836658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.836685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.842133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.842428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.842455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.847725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.848006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.848033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.853147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.853426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.853453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.858794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.859101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.859128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.864336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.864626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:49.020 [2024-07-13 07:12:56.864654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.869854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.870120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.870147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.875419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.875708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.875731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.880838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.881126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.881147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.886313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.886615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.886643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.891771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.892036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.892064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.897251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.897532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.897578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.902889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.903184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.903213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.908333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.908612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.908639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.913901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.914195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.914222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.919504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.919794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.919822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.924856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.925124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.925151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.930558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.930857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.930885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.935976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.936255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.936282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.941510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.941821] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.941849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.947164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.947441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.947468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.952606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.952888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.952914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.958093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.020 [2024-07-13 07:12:56.958372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.020 [2024-07-13 07:12:56.958401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.020 [2024-07-13 07:12:56.963850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:56.964131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:56.964159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:56.969431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:56.969726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:56.969753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:56.974901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:56.975196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:56.975224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:56.980328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:56.980619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:56.980647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:56.985795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:56.986075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:56.986103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:56.991223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:56.991502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:56.991531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:56.997019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:56.997299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:56.997327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.002978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.003256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.003284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.008716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.009002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.009043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.014711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.014985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.015012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.020356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 
00:26:49.021 [2024-07-13 07:12:57.020665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.020698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.026185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.026462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.026514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.031968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.032245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.032275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.037615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.037897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.037925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.043065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.043343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.043369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.048529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.048819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.048846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.053804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.054084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.054112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.059166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.059443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.059473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.064400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.064678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.064706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.069684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.069962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.069989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.074994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.075273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.075299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.080356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.080647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.080675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.085788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.086054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.086081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-07-13 07:12:57.091429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.021 [2024-07-13 07:12:57.091738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-07-13 07:12:57.091765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.097267] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.097558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.097585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.102902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.103212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.103238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.108213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.108492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.108519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.113665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.113945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.113970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.119122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.119387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.119414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.124454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.124743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.124773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.129632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.129911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.129937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:49.281 [2024-07-13 07:12:57.134985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.135265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.135292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.140381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.140671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.140698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.145661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.145941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.145968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.151003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.151285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.151313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.156341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.281 [2024-07-13 07:12:57.156633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.281 [2024-07-13 07:12:57.156660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.281 [2024-07-13 07:12:57.161690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.161968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.161996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.167107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.167384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.167411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.172480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.172761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.172788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.177704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.177971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.177999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.183083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.183347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.183374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.188386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.188676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.188705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.193691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.193967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.193994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.199107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.199387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.199415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.204483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.204762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.204790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.209808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.210088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.210116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.215222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.215487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.215511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.220441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.220718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.220752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.225662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.225939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.225966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.231060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.231338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.231366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.236526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.236835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.236861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.242311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.242632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.242660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.248194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.248477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.248512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.254166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.254444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.254472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.259952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.260230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.260258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.265778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.266057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.266084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.271758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.272040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.272066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.277272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.277560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.277587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.282846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.283120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 
[2024-07-13 07:12:57.283147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.288055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.288331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.288358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.293424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.293712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.293739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.298780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.299045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.299073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.304140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.304418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.304446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.309502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.309810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.309837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.314800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.282 [2024-07-13 07:12:57.315097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.282 [2024-07-13 07:12:57.315123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.282 [2024-07-13 07:12:57.320150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.283 [2024-07-13 07:12:57.320417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.283 [2024-07-13 07:12:57.320445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.283 [2024-07-13 07:12:57.325492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.283 [2024-07-13 07:12:57.325767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.283 [2024-07-13 07:12:57.325794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.283 [2024-07-13 07:12:57.330774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.283 [2024-07-13 07:12:57.331042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.283 [2024-07-13 07:12:57.331069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.283 [2024-07-13 07:12:57.336064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.283 [2024-07-13 07:12:57.336342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.283 [2024-07-13 07:12:57.336369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.283 [2024-07-13 07:12:57.341400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.283 [2024-07-13 07:12:57.341691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.283 [2024-07-13 07:12:57.341718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.283 [2024-07-13 07:12:57.346729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.283 [2024-07-13 07:12:57.347008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.283 [2024-07-13 07:12:57.347035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.283 [2024-07-13 07:12:57.352178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.283 [2024-07-13 07:12:57.352442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.283 [2024-07-13 07:12:57.352469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.357935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.358212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.358239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.363726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.363976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.364002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.368930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.369209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.369237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.374341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.374637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.374660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.379726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.380021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.380048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.385142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.385418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.385446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.390416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.390736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.390758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.395850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.396114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.396141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.401241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.401519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.401546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.406647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.406931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.406968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.411928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.412209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.412236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.417196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.417476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.417502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.422540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.422855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.422882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.427797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.428063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.428090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.433103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 
[2024-07-13 07:12:57.433367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.433394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.438376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.438675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.543 [2024-07-13 07:12:57.438703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.543 [2024-07-13 07:12:57.443815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.543 [2024-07-13 07:12:57.444079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.444107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.449120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.449399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.449426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.454443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.454749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.454777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.459726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.460004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.460030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.465156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.465436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.465464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.470442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.470753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.470781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.475762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.476026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.476053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.481068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.481348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.481375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.486477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.486799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.486827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.491852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.492115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.492141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.497078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.497355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.497382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.502396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.502714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.502742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.507667] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.507933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.507961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.512915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.513175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.513202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.518199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.518478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.518530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.523547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.523839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.523865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.528791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.529058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.529085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.534254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.534559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.534598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.539677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.539958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.539984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:49.544 [2024-07-13 07:12:57.544945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.545219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.545246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.550283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.550599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.550627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.555744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.556022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.556049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.560985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.561265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.561291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.566221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.566484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.566542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.571768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.572024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.572058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.577055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.577323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.577350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.582445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.582764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.582794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.587707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.587988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.588014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.593045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.593327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.544 [2024-07-13 07:12:57.593355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.544 [2024-07-13 07:12:57.598425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.544 [2024-07-13 07:12:57.598758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.545 [2024-07-13 07:12:57.598794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.545 [2024-07-13 07:12:57.603735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.545 [2024-07-13 07:12:57.604004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.545 [2024-07-13 07:12:57.604031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.545 [2024-07-13 07:12:57.608975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.545 [2024-07-13 07:12:57.609255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.545 [2024-07-13 07:12:57.609283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.545 [2024-07-13 07:12:57.614438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.545 [2024-07-13 07:12:57.614744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.545 [2024-07-13 07:12:57.614773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.804 [2024-07-13 07:12:57.620202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.804 [2024-07-13 07:12:57.620484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.804 [2024-07-13 07:12:57.620512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.804 [2024-07-13 07:12:57.625754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.804 [2024-07-13 07:12:57.626035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.804 [2024-07-13 07:12:57.626063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.804 [2024-07-13 07:12:57.631156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.804 [2024-07-13 07:12:57.631437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.804 [2024-07-13 07:12:57.631466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.804 [2024-07-13 07:12:57.636423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.804 [2024-07-13 07:12:57.636717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.804 [2024-07-13 07:12:57.636752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.804 [2024-07-13 07:12:57.641862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.804 [2024-07-13 07:12:57.642147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.804 [2024-07-13 07:12:57.642175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.804 [2024-07-13 07:12:57.647269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.804 [2024-07-13 07:12:57.647540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.804 [2024-07-13 07:12:57.647578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.804 [2024-07-13 07:12:57.652635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.804 [2024-07-13 07:12:57.652920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.804 [2024-07-13 07:12:57.652969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.804 [2024-07-13 07:12:57.658037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.804 [2024-07-13 07:12:57.658318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.804 [2024-07-13 07:12:57.658347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.804 [2024-07-13 07:12:57.663688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.804 [2024-07-13 07:12:57.663974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.664000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.669061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.669344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.669371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.674336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.674638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.674666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.679667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.679949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.679975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.684989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.685271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.685297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.690304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.690605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 
[2024-07-13 07:12:57.690632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.695651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.695919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.695946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.700927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.701224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.701251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.706354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.706656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.706683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.711859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.712141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.712162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.717338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.717632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.717654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.722762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.723064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.723090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.728047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.728329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.728356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.733361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.733642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.733691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.738789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.739089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.739117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.744154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.744437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.744464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.749545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.749874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.749902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.754956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.755238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.755265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.760265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.760560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.760587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.765608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.765889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.765917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.771061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.771343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.771369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.776414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.776708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.776737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.781814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.782097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.805 [2024-07-13 07:12:57.782124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.805 [2024-07-13 07:12:57.787175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.805 [2024-07-13 07:12:57.787457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.787483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.792517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.792815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.792843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.797780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.798060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.798086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.803235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.803516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.803543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.808533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.808828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.808855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.813841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.814122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.814149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.819401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.819696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.819719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.825162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.825446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.825473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.830827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.831134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.831164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.836431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.836749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.836777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.842286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 
[2024-07-13 07:12:57.842632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.842662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.848218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.848515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.848543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.854011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.854311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.854338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.859746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.860052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.860079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.865041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.865340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.865368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.870548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.870835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.870863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.806 [2024-07-13 07:12:57.876200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:49.806 [2024-07-13 07:12:57.876483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.806 [2024-07-13 07:12:57.876523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.881959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.882245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.882272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.887713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.888014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.888043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.893050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.893336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.893365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.898408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.898759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.898789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.904174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.904470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.904498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.909541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.909855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.909883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.915017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.915307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.915335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.920628] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.920929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.920957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.926059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.926358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.926387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.931686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.931985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.932012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.937093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.937391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.937420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.942555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.942873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.942903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.948037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.948335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.948364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.953498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.953814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.953843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:50.066 [2024-07-13 07:12:57.959033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.066 [2024-07-13 07:12:57.959314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.066 [2024-07-13 07:12:57.959343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.066 [2024-07-13 07:12:57.964696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:57.964983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:57.965012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:57.970645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:57.970940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:57.970966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:57.976370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:57.976682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:57.976722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:57.982362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:57.982671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:57.982713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:57.988151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:57.988450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:57.988479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:57.994025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:57.994312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:57.994339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:57.999913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.000230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.000258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.005756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.006058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.006080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.011640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.011939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.011967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.017328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.017645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.017674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.023063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.023351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.023379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.028947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.029250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.029279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.035081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.035365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.035393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.041081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.041365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.041393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.047014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.047333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.047361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.052904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.053250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.053279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.058823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.059096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.059130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.064466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.064790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.064822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.070002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.070302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.070331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:50.067 [2024-07-13 07:12:58.075728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90 00:26:50.067 [2024-07-13 07:12:58.076029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:50.067 [2024-07-13 07:12:58.076062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:50.067 [2024-07-13 07:12:58.081102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90
00:26:50.067 [2024-07-13 07:12:58.081389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.067 [2024-07-13 07:12:58.081417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:50.067 [2024-07-13 07:12:58.086678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90
00:26:50.067 [2024-07-13 07:12:58.086967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.067 [2024-07-13 07:12:58.086998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-message cycle (forced data digest error on tqpair=(0x1f853a0), WRITE sqid:1 cid:15 at a new LBA, completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for every queued WRITE from 07:12:58.092 through 07:12:58.680 ...]
00:26:50.848 [2024-07-13 07:12:58.684943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90
00:26:50.848 [2024-07-13 07:12:58.685245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.848 [2024-07-13 07:12:58.685279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:50.848 [2024-07-13 07:12:58.690233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90
00:26:50.848 [2024-07-13 07:12:58.690542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.848 [2024-07-13 07:12:58.690590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:50.848 [2024-07-13 07:12:58.695628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f853a0) with pdu=0x2000190fef90
00:26:50.848 [2024-07-13 07:12:58.695928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:50.848 [2024-07-13 07:12:58.695964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:50.848
00:26:50.848 Latency(us)
00:26:50.848 Device Information : runtime(s)     IOPS     MiB/s   Fail/s   TO/s   Average      min      max
00:26:50.848 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:50.848 nvme0n1            :       2.00  5616.75   702.09     0.00   0.00   2843.05  2189.50  10962.39
00:26:50.848 ===================================================================================================================
00:26:50.848 Total              :             5616.75   702.09     0.00   0.00   2843.05  2189.50  10962.39
00:26:50.848 0
00:26:50.848 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:50.848 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:50.848 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:50.848 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:50.848 | .driver_specific
00:26:50.848 | .nvme_error
00:26:50.848 | .status_code
00:26:50.848 | .command_transient_transport_error'
00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 362 > 0 ))
00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112684
00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112684 ']'
00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112684
00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112684
00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
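The xtrace above is the pass/fail core of this test case: get_transient_errcount asks the application behind /var/tmp/bperf.sock for bdev I/O statistics via bdev_get_iostat, jq extracts the command_transient_transport_error counter, and the case passes only because that counter (362 here) is greater than zero after the forced data-digest failures. A minimal stand-alone sketch of the same query, assuming the bperf application started earlier in this log is still serving RPCs on that socket and still exposes a bdev named nvme0n1 (socket path, bdev name and jq path are taken from the trace; running this outside the test harness is an assumption):

  # socket path, bdev name and jq path taken from the trace above (assumed still valid)
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest error path counts as exercised only if at least one WRITE completed with
  # COMMAND TRANSIENT TRANSPORT ERROR, i.e. the extracted counter is non-zero.
  (( errcount > 0 )) && echo "transient transport errors observed: $errcount"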
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:51.106 killing process with pid 112684 00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112684' 00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112684 00:26:51.106 Received shutdown signal, test time was about 2.000000 seconds 00:26:51.106 00:26:51.106 Latency(us) 00:26:51.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.106 =================================================================================================================== 00:26:51.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.106 07:12:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112684 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 112373 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 112373 ']' 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 112373 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112373 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.364 killing process with pid 112373 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112373' 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 112373 00:26:51.364 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 112373 00:26:51.621 00:26:51.621 real 0m18.673s 00:26:51.621 user 0m34.963s 00:26:51.621 sys 0m5.219s 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:51.621 ************************************ 00:26:51.621 END TEST nvmf_digest_error 00:26:51.621 ************************************ 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.621 rmmod nvme_tcp 00:26:51.621 rmmod nvme_fabrics 00:26:51.621 rmmod nvme_keyring 00:26:51.621 07:12:59 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 112373 ']' 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 112373 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 112373 ']' 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 112373 00:26:51.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (112373) - No such process 00:26:51.621 Process with pid 112373 is not found 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 112373 is not found' 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.621 07:12:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.879 07:12:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:51.879 00:26:51.879 real 0m37.955s 00:26:51.879 user 1m9.277s 00:26:51.879 sys 0m10.768s 00:26:51.880 07:12:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:51.880 ************************************ 00:26:51.880 END TEST nvmf_digest 00:26:51.880 ************************************ 00:26:51.880 07:12:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:51.880 07:12:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:51.880 07:12:59 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:26:51.880 07:12:59 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:26:51.880 07:12:59 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:26:51.880 07:12:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:51.880 07:12:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.880 07:12:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:51.880 ************************************ 00:26:51.880 START TEST nvmf_mdns_discovery 00:26:51.880 ************************************ 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:26:51.880 * Looking for test storage... 
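The digest suite is torn down above (killprocess, nvmftestfini, NVMe module removal) and the harness immediately launches the next host test through run_test, which produces the START TEST nvmf_mdns_discovery banner. A rough sketch of invoking that script on its own, assuming a built SPDK tree at the path recorded in the trace and sufficient privileges for module loading and virtual network setup (running it outside the run_test wrapper is an assumption):

  # paths and the --transport argument are as recorded in the trace; everything else is assumed
  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/nvmf/host/mdns_discovery.sh --transport=tcp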
00:26:51.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:26:51.880 
07:12:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:51.880 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:51.881 Cannot find device "nvmf_tgt_br" 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:51.881 Cannot find device "nvmf_tgt_br2" 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:26:51.881 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:52.140 Cannot find device "nvmf_tgt_br" 00:26:52.140 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:26:52.140 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:52.140 Cannot find device "nvmf_tgt_br2" 00:26:52.140 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:26:52.140 07:12:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:52.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:52.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:52.140 07:13:00 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:52.140 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:52.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:26:52.399 00:26:52.399 --- 10.0.0.2 ping statistics --- 00:26:52.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.399 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:52.399 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:52.399 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:26:52.399 00:26:52.399 --- 10.0.0.3 ping statistics --- 00:26:52.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.399 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:52.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:26:52.399 00:26:52.399 --- 10.0.0.1 ping statistics --- 00:26:52.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.399 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=112981 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:26:52.399 
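The nvmf_veth_init trace above builds the test network in four steps: a network namespace (nvmf_tgt_ns_spdk) for the target, veth pairs whose target ends are moved into that namespace and addressed 10.0.0.2 / 10.0.0.3 while the initiator keeps 10.0.0.1, a bridge (nvmf_br) tying the root-namespace ends together, and iptables rules plus three pings to validate connectivity. Condensed into a standalone sketch using only the commands and names visible in the log (the earlier delete/cleanup steps and their fallbacks are omitted):

#!/usr/bin/env bash
# Condensed from the nvmf_veth_init trace above; cleanup and error fallbacks omitted.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends get bridged;
# the two target-facing interfaces move into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addresses: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the three root-namespace ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic in and hairpin traffic across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity checks, matching the three pings in the log
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1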
07:13:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 112981 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 112981 ']' 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:52.399 07:13:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.399 [2024-07-13 07:13:00.313912] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:26:52.399 [2024-07-13 07:13:00.314024] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.399 [2024-07-13 07:13:00.456880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.657 [2024-07-13 07:13:00.582147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.657 [2024-07-13 07:13:00.582235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.657 [2024-07-13 07:13:00.582255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.657 [2024-07-13 07:13:00.582267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.657 [2024-07-13 07:13:00.582277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
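waitforlisten 112981 blocks until the nvmf_tgt just launched inside the namespace is alive and answering on its RPC socket (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above). The real helper lives in autotest_common.sh; a hypothetical minimal stand-in, assuming a 100 x 0.1 s polling budget and using the rpc_get_methods RPC as the liveness probe (both assumptions, not taken from the log), could look like:

# Hypothetical stand-in for the waitforlisten helper seen in the trace.
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1                  # target process died
        if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            return 0                                            # RPC socket is up
        fi
        sleep 0.1
    done
    return 1                                                    # timed out
}

# usage matching the trace: wait_for_rpc_socket 112981 /var/tmp/spdk.sock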
00:26:52.657 [2024-07-13 07:13:00.582321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.223 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.223 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:53.223 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:53.223 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:53.223 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.223 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.223 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:26:53.223 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.223 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.481 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.481 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:26:53.481 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.481 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.481 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.482 [2024-07-13 07:13:01.433586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.482 [2024-07-13 07:13:01.441712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.482 null0 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:26:53.482 null1 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.482 null2 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.482 null3 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=113031 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 113031 /tmp/host.sock 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 113031 ']' 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:53.482 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.482 07:13:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.482 [2024-07-13 07:13:01.537222] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
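At this point the target-side application (pid 112981, inside the namespace) has been configured and a second nvmf_tgt (-m 0x1 -r /tmp/host.sock) is being launched to play the discovering-host role. rpc_cmd in these traces forwards its arguments to scripts/rpc.py, so the target-side provisioning shown above is roughly equivalent to the following sketch, reusing the arguments from the log and assuming the default /var/tmp/spdk.sock socket:

# Rough rpc.py equivalent of the target-side setup traced above.
RPC="scripts/rpc.py"                                  # default socket /var/tmp/spdk.sock

$RPC nvmf_set_config --discovery-filter=address      # DISCOVERY_FILTER=address from the test
$RPC framework_start_init                             # finish init deferred by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport; flags copied from the trace
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
     -t tcp -a 10.0.0.2 -s 8009                       # discovery service on the first target IP

# null bdevs that back the test subsystems (same size/block-size arguments as the trace)
for n in null0 null1 null2 null3; do
    $RPC bdev_null_create "$n" 1000 512
done
$RPC bdev_wait_for_examine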
00:26:53.482 [2024-07-13 07:13:01.537307] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113031 ] 00:26:53.740 [2024-07-13 07:13:01.668875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.740 [2024-07-13 07:13:01.772717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.677 07:13:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:54.677 07:13:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:54.677 07:13:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:26:54.677 07:13:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:26:54.677 07:13:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:26:54.677 07:13:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=113060 00:26:54.677 07:13:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:26:54.677 07:13:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:26:54.677 07:13:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:26:54.677 Process 982 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:26:54.677 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:26:54.677 Successfully dropped root privileges. 00:26:54.677 avahi-daemon 0.8 starting up. 00:26:54.677 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:26:54.677 Successfully called chroot(). 00:26:54.677 Successfully dropped remaining capabilities. 00:26:54.677 No service file found in /etc/avahi/services. 00:26:55.610 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:26:55.610 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:26:55.610 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:26:55.610 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:26:55.610 Network interface enumeration completed. 00:26:55.610 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:26:55.610 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:26:55.610 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:26:55.610 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:26:55.610 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 1880386105. 
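The avahi-daemon restart above is what makes the targets discoverable over mDNS: the test first kills any running daemon (the "Process 982 died" line) and then starts a fresh instance inside the target namespace, feeding a minimal config through process substitution (-f /dev/fd/63). Reconstructed from the echo -e in the trace, the config pins avahi to the two target interfaces and IPv4 only; an equivalent that uses a temporary file instead of /dev/fd/63:

# Equivalent of the avahi-daemon launch in the trace, with a temp file in place
# of the /dev/fd/63 process substitution used by the test script.
cat > /tmp/avahi-nvmf.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF

# run inside the target namespace so it binds the namespaced interfaces
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf.conf &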
00:26:55.610 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:55.610 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.610 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.610 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.610 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:26:55.610 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.610 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.610 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.610 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:26:55.869 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:26:56.128 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 [2024-07-13 07:13:03.978882] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:26:56.128 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:26:56.128 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:26:56.128 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.128 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:26:56.128 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.128 07:13:03 
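The empty-string comparisons here ('' == '') establish the baseline: before discovery resolves anything, the host application has no NVMe controllers and no bdevs. They come from the get_subsystem_names and get_bdev_list helpers in mdns_discovery.sh, which flatten an RPC result into one sorted, space-separated line so it can be compared with [[ ... == ... ]]. A sketch of that pattern, with the function bodies reconstructed from the individual rpc_cmd / jq / sort / xargs steps in the trace (rpc_cmd assumed to forward to scripts/rpc.py):

get_subsystem_names() {
    # controller names known to the host app, flattened to one sorted line
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # bdev names the host app has created from attached subsystems
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# before discovery has attached anything, both are empty:
[[ $(get_subsystem_names) == '' ]]
[[ $(get_bdev_list) == '' ]]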
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.128 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:26:56.128 07:13:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.128 [2024-07-13 07:13:04.054870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.128 [2024-07-13 07:13:04.094756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 07:13:04 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.128 [2024-07-13 07:13:04.102742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.128 07:13:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:26:57.061 [2024-07-13 07:13:04.878887] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:26:57.627 [2024-07-13 07:13:05.478896] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:26:57.627 [2024-07-13 07:13:05.478962] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:26:57.627 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:57.627 cookie is 0 00:26:57.627 is_local: 1 00:26:57.627 our_own: 0 00:26:57.627 wide_area: 0 00:26:57.627 multicast: 1 00:26:57.627 cached: 1 00:26:57.627 [2024-07-13 07:13:05.578880] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:26:57.627 [2024-07-13 07:13:05.578907] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:26:57.627 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:57.627 cookie is 0 00:26:57.627 is_local: 1 00:26:57.627 our_own: 0 00:26:57.627 wide_area: 0 00:26:57.627 multicast: 1 00:26:57.627 cached: 1 00:26:57.627 [2024-07-13 07:13:05.578918] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:26:57.627 [2024-07-13 07:13:05.678881] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:26:57.627 [2024-07-13 07:13:05.678906] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:26:57.627 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:57.627 cookie is 0 00:26:57.627 is_local: 1 00:26:57.627 our_own: 0 00:26:57.627 wide_area: 0 00:26:57.627 multicast: 1 00:26:57.627 cached: 1 00:26:57.886 [2024-07-13 07:13:05.778878] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:26:57.886 [2024-07-13 07:13:05.778909] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:26:57.886 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:26:57.886 cookie is 0 00:26:57.886 is_local: 1 00:26:57.886 our_own: 0 00:26:57.886 wide_area: 0 00:26:57.886 multicast: 1 00:26:57.886 cached: 1 00:26:57.886 [2024-07-13 07:13:05.778919] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:26:58.452 [2024-07-13 07:13:06.487901] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:58.452 [2024-07-13 07:13:06.487950] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:58.452 [2024-07-13 07:13:06.487978] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:58.712 [2024-07-13 07:13:06.574022] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:26:58.712 [2024-07-13 07:13:06.631253] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:26:58.712 [2024-07-13 07:13:06.631282] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:58.712 [2024-07-13 07:13:06.687454] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:58.712 [2024-07-13 07:13:06.687477] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:58.712 [2024-07-13 07:13:06.687494] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:58.712 [2024-07-13 07:13:06.775568] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:26:58.970 [2024-07-13 07:13:06.838435] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:26:58.970 [2024-07-13 07:13:06.838463] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:01.496 07:13:09 
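The avahi resolve records and "mDNS discovery entry exists already" messages above show the client skipping duplicates: both advertised services (spdk0, spdk1) resolve to the same discovery address on each interface. Once the discovery controllers attach, the two subsystems appear on the host as the mdns0_nvme0 / mdns1_nvme0 controllers and their mdns*_nvme0n1 bdevs, which is what the checks around this point verify. The host-side RPCs involved, reusing the arguments from the log (socket /tmp/host.sock):

RPC="scripts/rpc.py -s /tmp/host.sock"

# start mDNS-based discovery: browse _nvme-disc._tcp and attach with the test host NQN
$RPC bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# what the subsequent checks query
$RPC bdev_nvme_get_mdns_discovery_info    # name: mdns
$RPC bdev_nvme_get_discovery_info         # discovery ctrlrs: mdns0_nvme, mdns1_nvme
$RPC bdev_nvme_get_controllers            # controllers: mdns0_nvme0, mdns1_nvme0
$RPC bdev_get_bdevs                       # bdevs: mdns0_nvme0n1, mdns1_nvme0n1, ...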
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.496 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:27:01.496 
07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.497 07:13:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.869 [2024-07-13 07:13:10.649196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:02.869 [2024-07-13 07:13:10.650430] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:02.869 [2024-07-13 07:13:10.650468] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:02.869 [2024-07-13 07:13:10.650511] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:02.869 [2024-07-13 07:13:10.650531] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.869 [2024-07-13 07:13:10.657031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:02.869 [2024-07-13 07:13:10.657382] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:02.869 [2024-07-13 07:13:10.657425] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.869 07:13:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:27:02.869 [2024-07-13 07:13:10.788477] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:27:02.869 [2024-07-13 07:13:10.788688] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:27:02.869 [2024-07-13 07:13:10.851774] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:02.869 [2024-07-13 07:13:10.851799] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:02.869 [2024-07-13 07:13:10.851815] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:02.869 [2024-07-13 07:13:10.851831] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:02.869 [2024-07-13 07:13:10.851869] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:02.869 [2024-07-13 07:13:10.851877] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:02.869 [2024-07-13 07:13:10.851883] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:02.869 [2024-07-13 07:13:10.851895] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:02.869 [2024-07-13 07:13:10.898569] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:02.869 [2024-07-13 07:13:10.898589] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:02.869 [2024-07-13 07:13:10.898627] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:02.869 [2024-07-13 07:13:10.898635] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:03.849 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.110 [2024-07-13 07:13:11.978036] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:04.110 [2024-07-13 07:13:11.978100] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:04.110 [2024-07-13 07:13:11.978133] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:04.110 [2024-07-13 07:13:11.978146] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:04.110 [2024-07-13 07:13:11.978305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.110 [2024-07-13 07:13:11.978340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.110 [2024-07-13 07:13:11.978362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.110 [2024-07-13 07:13:11.978371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.110 [2024-07-13 07:13:11.978388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.110 [2024-07-13 07:13:11.978396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.110 [2024-07-13 07:13:11.978404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.110 [2024-07-13 07:13:11.978412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.110 [2024-07-13 07:13:11.978421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.110 [2024-07-13 07:13:11.988244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.110 [2024-07-13 07:13:11.990045] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:04.110 [2024-07-13 07:13:11.990091] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:04.110 [2024-07-13 07:13:11.991284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.110 [2024-07-13 07:13:11.991315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.110 [2024-07-13 07:13:11.991328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.110 [2024-07-13 07:13:11.991336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.110 [2024-07-13 07:13:11.991346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.110 [2024-07-13 07:13:11.991354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.110 [2024-07-13 07:13:11.991364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.110 [2024-07-13 07:13:11.991372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.110 [2024-07-13 07:13:11.991381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.110 07:13:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:27:04.110 [2024-07-13 07:13:11.998283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.110 [2024-07-13 07:13:11.998424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.110 [2024-07-13 07:13:11.998446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.110 [2024-07-13 07:13:11.998456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.110 [2024-07-13 07:13:11.998478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.110 [2024-07-13 07:13:11.998493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.110 [2024-07-13 07:13:11.998510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.110 [2024-07-13 07:13:11.998530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:27:04.110 [2024-07-13 07:13:11.998545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.110 [2024-07-13 07:13:12.001242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.110 [2024-07-13 07:13:12.008351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.110 [2024-07-13 07:13:12.008426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.110 [2024-07-13 07:13:12.008444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.110 [2024-07-13 07:13:12.008454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.110 [2024-07-13 07:13:12.008468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.110 [2024-07-13 07:13:12.008482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.110 [2024-07-13 07:13:12.008491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.110 [2024-07-13 07:13:12.008499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.110 [2024-07-13 07:13:12.008512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.110 [2024-07-13 07:13:12.011253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.110 [2024-07-13 07:13:12.011327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.110 [2024-07-13 07:13:12.011345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.110 [2024-07-13 07:13:12.011355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.110 [2024-07-13 07:13:12.011369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.110 [2024-07-13 07:13:12.011382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.110 [2024-07-13 07:13:12.011392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.110 [2024-07-13 07:13:12.011400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.110 [2024-07-13 07:13:12.011413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.110 [2024-07-13 07:13:12.018399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.110 [2024-07-13 07:13:12.018469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.110 [2024-07-13 07:13:12.018487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.110 [2024-07-13 07:13:12.018496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.110 [2024-07-13 07:13:12.018519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.018533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.018542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.018561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.111 [2024-07-13 07:13:12.018577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.111 [2024-07-13 07:13:12.021299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.111 [2024-07-13 07:13:12.021375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.021393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.111 [2024-07-13 07:13:12.021402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.021418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.021432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.021441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.021450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.111 [2024-07-13 07:13:12.021463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.111 [2024-07-13 07:13:12.028444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.111 [2024-07-13 07:13:12.028522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.028540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.111 [2024-07-13 07:13:12.028559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.028577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.028590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.028600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.028608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.111 [2024-07-13 07:13:12.028620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.111 [2024-07-13 07:13:12.031342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.111 [2024-07-13 07:13:12.031410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.031427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.111 [2024-07-13 07:13:12.031437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.031451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.031464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.031473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.031482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.111 [2024-07-13 07:13:12.031495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.111 [2024-07-13 07:13:12.038491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.111 [2024-07-13 07:13:12.038592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.038612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.111 [2024-07-13 07:13:12.038622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.038637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.038651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.038660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.038668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.111 [2024-07-13 07:13:12.038681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.111 [2024-07-13 07:13:12.041386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.111 [2024-07-13 07:13:12.041454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.041472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.111 [2024-07-13 07:13:12.041482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.041496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.041509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.041518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.041527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.111 [2024-07-13 07:13:12.041540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.111 [2024-07-13 07:13:12.048559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.111 [2024-07-13 07:13:12.048630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.048648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.111 [2024-07-13 07:13:12.048657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.048672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.048685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.048694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.048702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.111 [2024-07-13 07:13:12.048715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.111 [2024-07-13 07:13:12.051429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.111 [2024-07-13 07:13:12.051497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.051515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.111 [2024-07-13 07:13:12.051525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.051539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.051572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.051584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.051592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.111 [2024-07-13 07:13:12.051605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.111 [2024-07-13 07:13:12.058604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.111 [2024-07-13 07:13:12.058674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.058692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.111 [2024-07-13 07:13:12.058702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.058716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.058729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.058737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.058745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.111 [2024-07-13 07:13:12.058758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.111 [2024-07-13 07:13:12.061472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.111 [2024-07-13 07:13:12.061540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.061569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.111 [2024-07-13 07:13:12.061580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.061595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.061608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.061617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.061625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.111 [2024-07-13 07:13:12.061638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.111 [2024-07-13 07:13:12.068650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.111 [2024-07-13 07:13:12.068720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.068738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.111 [2024-07-13 07:13:12.068748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.111 [2024-07-13 07:13:12.068763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.111 [2024-07-13 07:13:12.068775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.111 [2024-07-13 07:13:12.068784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.111 [2024-07-13 07:13:12.068792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.111 [2024-07-13 07:13:12.068805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.111 [2024-07-13 07:13:12.071513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.111 [2024-07-13 07:13:12.071590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.111 [2024-07-13 07:13:12.071608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.111 [2024-07-13 07:13:12.071618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.071642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.071657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.071671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.071679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.112 [2024-07-13 07:13:12.071692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.112 [2024-07-13 07:13:12.078696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.112 [2024-07-13 07:13:12.078773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.112 [2024-07-13 07:13:12.078791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.112 [2024-07-13 07:13:12.078801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.078816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.078829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.078837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.078845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.112 [2024-07-13 07:13:12.078864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.112 [2024-07-13 07:13:12.081567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.112 [2024-07-13 07:13:12.081642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.112 [2024-07-13 07:13:12.081661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.112 [2024-07-13 07:13:12.081671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.081685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.081698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.081708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.081716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.112 [2024-07-13 07:13:12.081729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.112 [2024-07-13 07:13:12.088744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.112 [2024-07-13 07:13:12.088823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.112 [2024-07-13 07:13:12.088841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.112 [2024-07-13 07:13:12.088851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.088865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.088879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.088888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.088896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.112 [2024-07-13 07:13:12.088909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.112 [2024-07-13 07:13:12.091614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.112 [2024-07-13 07:13:12.091683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.112 [2024-07-13 07:13:12.091700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.112 [2024-07-13 07:13:12.091710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.091723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.091737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.091746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.091755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.112 [2024-07-13 07:13:12.091767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.112 [2024-07-13 07:13:12.098789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.112 [2024-07-13 07:13:12.098860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.112 [2024-07-13 07:13:12.098877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.112 [2024-07-13 07:13:12.098887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.098901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.098916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.098924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.098933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.112 [2024-07-13 07:13:12.098945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.112 [2024-07-13 07:13:12.101657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.112 [2024-07-13 07:13:12.101725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.112 [2024-07-13 07:13:12.101742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.112 [2024-07-13 07:13:12.101752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.101766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.101779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.101788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.101796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.112 [2024-07-13 07:13:12.101809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.112 [2024-07-13 07:13:12.108835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.112 [2024-07-13 07:13:12.108904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.112 [2024-07-13 07:13:12.108921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.112 [2024-07-13 07:13:12.108931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.108946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.108960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.108968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.108977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.112 [2024-07-13 07:13:12.108989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.112 [2024-07-13 07:13:12.111700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:04.112 [2024-07-13 07:13:12.111768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.112 [2024-07-13 07:13:12.111786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba9820 with addr=10.0.0.3, port=4420 00:27:04.112 [2024-07-13 07:13:12.111796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba9820 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.111810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba9820 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.111823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.111832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.111840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:04.112 [2024-07-13 07:13:12.111855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.112 [2024-07-13 07:13:12.118881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.112 [2024-07-13 07:13:12.118960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.112 [2024-07-13 07:13:12.118977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcdca0 with addr=10.0.0.2, port=4420 00:27:04.112 [2024-07-13 07:13:12.118987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcdca0 is same with the state(5) to be set 00:27:04.112 [2024-07-13 07:13:12.119002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcdca0 (9): Bad file descriptor 00:27:04.112 [2024-07-13 07:13:12.119014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.112 [2024-07-13 07:13:12.119023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.112 [2024-07-13 07:13:12.119031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.112 [2024-07-13 07:13:12.119043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.112 [2024-07-13 07:13:12.121145] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:27:04.112 [2024-07-13 07:13:12.121173] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:04.112 [2024-07-13 07:13:12.121190] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:04.112 [2024-07-13 07:13:12.121221] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:04.112 [2024-07-13 07:13:12.121236] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:04.112 [2024-07-13 07:13:12.121247] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:04.372 [2024-07-13 07:13:12.207225] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:04.372 [2024-07-13 07:13:12.207279] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:04.941 07:13:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:27:04.941 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:04.941 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:04.941 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.941 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:04.941 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.941 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.203 07:13:13 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.203 
07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.203 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.461 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:05.461 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:05.461 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:27:05.461 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:05.461 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.461 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.461 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.461 07:13:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:27:05.461 [2024-07-13 07:13:13.378920] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.395 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:27:06.396 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:06.654 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:06.655 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:06.655 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.655 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:06.655 [2024-07-13 07:13:14.532433] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:27:06.655 2024/07/13 07:13:14 error on JSON-RPC call, method: 
bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:06.655 request: 00:27:06.655 { 00:27:06.655 "method": "bdev_nvme_start_mdns_discovery", 00:27:06.655 "params": { 00:27:06.655 "name": "mdns", 00:27:06.655 "svcname": "_nvme-disc._http", 00:27:06.655 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:06.655 } 00:27:06.655 } 00:27:06.655 Got JSON-RPC error response 00:27:06.655 GoRPCClient: error on JSON-RPC call 00:27:06.655 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:06.655 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:06.655 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:06.655 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:06.655 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:06.655 07:13:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:27:07.222 [2024-07-13 07:13:15.121285] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:07.222 [2024-07-13 07:13:15.221279] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:07.480 [2024-07-13 07:13:15.321288] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:07.480 [2024-07-13 07:13:15.321322] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:07.480 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:07.480 cookie is 0 00:27:07.480 is_local: 1 00:27:07.480 our_own: 0 00:27:07.480 wide_area: 0 00:27:07.480 multicast: 1 00:27:07.480 cached: 1 00:27:07.480 [2024-07-13 07:13:15.421288] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:07.480 [2024-07-13 07:13:15.421315] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:27:07.480 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:07.480 cookie is 0 00:27:07.480 is_local: 1 00:27:07.480 our_own: 0 00:27:07.480 wide_area: 0 00:27:07.480 multicast: 1 00:27:07.480 cached: 1 00:27:07.480 [2024-07-13 07:13:15.421336] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:07.480 [2024-07-13 07:13:15.521287] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:07.480 [2024-07-13 07:13:15.521320] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:07.480 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:07.480 cookie is 0 00:27:07.480 is_local: 1 00:27:07.480 our_own: 0 00:27:07.480 wide_area: 0 00:27:07.480 multicast: 1 00:27:07.480 cached: 1 00:27:07.737 [2024-07-13 07:13:15.621287] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:07.737 [2024-07-13 07:13:15.621310] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:27:07.737 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:07.737 cookie is 0 00:27:07.737 is_local: 1 00:27:07.737 our_own: 0 00:27:07.737 wide_area: 0 00:27:07.737 multicast: 1 00:27:07.737 cached: 1 00:27:07.737 [2024-07-13 07:13:15.621329] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:08.303 [2024-07-13 07:13:16.329532] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:08.303 [2024-07-13 07:13:16.329584] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:08.303 [2024-07-13 07:13:16.329602] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:08.562 [2024-07-13 07:13:16.416635] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:27:08.562 [2024-07-13 07:13:16.477093] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:08.562 [2024-07-13 07:13:16.477123] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:08.562 [2024-07-13 07:13:16.529191] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:08.562 [2024-07-13 07:13:16.529213] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:08.562 [2024-07-13 07:13:16.529229] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:08.562 [2024-07-13 07:13:16.615293] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:27:08.820 [2024-07-13 07:13:16.675098] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:08.820 [2024-07-13 07:13:16.675124] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:12.106 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.107 
07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.107 [2024-07-13 07:13:19.715212] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:27:12.107 2024/07/13 07:13:19 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:12.107 request: 00:27:12.107 { 00:27:12.107 "method": "bdev_nvme_start_mdns_discovery", 00:27:12.107 "params": { 00:27:12.107 "name": "cdc", 00:27:12.107 "svcname": "_nvme-disc._tcp", 00:27:12.107 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:12.107 } 00:27:12.107 } 00:27:12.107 Got JSON-RPC error response 00:27:12.107 GoRPCClient: error on JSON-RPC call 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 113031 00:27:12.107 07:13:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 113031 00:27:12.107 [2024-07-13 07:13:19.984311] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:12.107 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 113060 00:27:12.107 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:27:12.107 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:12.107 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:27:12.107 Got SIGTERM, quitting. 00:27:12.107 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:12.107 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:12.107 avahi-daemon 0.8 exiting. 
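For reference, the duplicate-start behaviour exercised above (the two Code=-17 "File exists" responses) can be reproduced by hand against a running target. This is a minimal sketch, assuming rpc.py (the script the test's rpc_cmd wrapper invokes) is pointed at the same /tmp/host.sock socket; it is not part of the recorded run:

  # First start succeeds and begins browsing _nvme-disc._tcp via avahi.
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # Re-using the name "mdns" with another service type is rejected (-17, "mDNS discovery already running with name mdns").
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
  # A different name for the same service type is rejected as well ("already running for service _nvme-disc._tcp").
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # Tear the discovery service down again, as the test does before shutdown.
  rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns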
00:27:12.107 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.107 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:27:12.107 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.107 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.107 rmmod nvme_tcp 00:27:12.107 rmmod nvme_fabrics 00:27:12.366 rmmod nvme_keyring 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 112981 ']' 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 112981 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 112981 ']' 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 112981 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112981 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:12.366 killing process with pid 112981 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112981' 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 112981 00:27:12.366 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 112981 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:12.627 ************************************ 00:27:12.627 END TEST nvmf_mdns_discovery 00:27:12.627 ************************************ 00:27:12.627 00:27:12.627 real 0m20.774s 00:27:12.627 user 0m40.551s 00:27:12.627 sys 0m2.135s 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:12.627 07:13:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:12.627 07:13:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 
0 00:27:12.627 07:13:20 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:27:12.627 07:13:20 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:12.627 07:13:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:12.627 07:13:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.627 07:13:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:12.627 ************************************ 00:27:12.627 START TEST nvmf_host_multipath 00:27:12.627 ************************************ 00:27:12.627 07:13:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:12.627 * Looking for test storage... 00:27:12.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:12.627 07:13:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:12.627 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:12.890 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.890 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:12.891 Cannot 
find device "nvmf_tgt_br" 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:12.891 Cannot find device "nvmf_tgt_br2" 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:12.891 Cannot find device "nvmf_tgt_br" 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:12.891 Cannot find device "nvmf_tgt_br2" 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:12.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:12.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:12.891 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:13.150 07:13:20 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:13.150 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:13.150 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:13.150 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:13.150 07:13:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:13.150 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:13.150 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:13.150 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:13.150 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:13.150 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:13.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:27:13.150 00:27:13.150 --- 10.0.0.2 ping statistics --- 00:27:13.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.151 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:13.151 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:13.151 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:27:13.151 00:27:13.151 --- 10.0.0.3 ping statistics --- 00:27:13.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.151 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:13.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:27:13.151 00:27:13.151 --- 10.0.0.1 ping statistics --- 00:27:13.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.151 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=113617 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 113617 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 113617 ']' 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.151 07:13:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:13.151 [2024-07-13 07:13:21.144304] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:13.151 [2024-07-13 07:13:21.145370] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.409 [2024-07-13 07:13:21.284505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:13.409 [2024-07-13 07:13:21.381237] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
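The nvmf_veth_init sequence above reduces to a short, self-contained topology setup. The sketch below condenses the commands visible in the log (names and addresses unchanged); bringing each link up and the final connectivity pings are elided:

  # Target ports live in their own network namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side: 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target port 1: 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target port 2: 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bridge the peer ends so all three endpoints share one L2 segment, then allow TCP/4420 in.
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # nvmf_tgt is then launched inside the namespace, as logged:
  # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3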
00:27:13.409 [2024-07-13 07:13:21.381309] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.409 [2024-07-13 07:13:21.381319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.409 [2024-07-13 07:13:21.381327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.409 [2024-07-13 07:13:21.381333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.409 [2024-07-13 07:13:21.381497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.409 [2024-07-13 07:13:21.381505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.346 07:13:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.346 07:13:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:27:14.346 07:13:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:14.346 07:13:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:14.346 07:13:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:14.346 07:13:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.346 07:13:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=113617 00:27:14.346 07:13:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:14.604 [2024-07-13 07:13:22.454969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.604 07:13:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:14.862 Malloc0 00:27:14.862 07:13:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:15.120 07:13:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:15.377 07:13:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.377 [2024-07-13 07:13:23.414752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.377 07:13:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:15.636 [2024-07-13 07:13:23.690802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:15.893 07:13:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:15.893 07:13:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=113715 00:27:15.893 07:13:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:15.893 07:13:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 113715 /var/tmp/bdevperf.sock 00:27:15.893 07:13:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 113715 ']' 00:27:15.893 07:13:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:15.893 07:13:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.894 07:13:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:15.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:15.894 07:13:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.894 07:13:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:16.828 07:13:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.828 07:13:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:27:16.828 07:13:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:17.086 07:13:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:17.344 Nvme0n1 00:27:17.344 07:13:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:17.602 Nvme0n1 00:27:17.602 07:13:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:17.602 07:13:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:18.979 07:13:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:18.979 07:13:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:18.979 07:13:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:19.236 07:13:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:19.236 07:13:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113617 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:19.236 07:13:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113807 00:27:19.236 07:13:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:25.848 07:13:33 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:25.848 Attaching 4 probes... 00:27:25.848 @path[10.0.0.2, 4421]: 19997 00:27:25.848 @path[10.0.0.2, 4421]: 20153 00:27:25.848 @path[10.0.0.2, 4421]: 20069 00:27:25.848 @path[10.0.0.2, 4421]: 20064 00:27:25.848 @path[10.0.0.2, 4421]: 19867 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113807 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:25.848 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:26.105 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:26.105 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113933 00:27:26.105 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:26.105 07:13:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113617 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:32.672 07:13:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:32.672 07:13:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:32.672 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:32.672 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:32.672 Attaching 4 probes... 
00:27:32.672 @path[10.0.0.2, 4420]: 19474 00:27:32.672 @path[10.0.0.2, 4420]: 19803 00:27:32.672 @path[10.0.0.2, 4420]: 20135 00:27:32.672 @path[10.0.0.2, 4420]: 20231 00:27:32.672 @path[10.0.0.2, 4420]: 20033 00:27:32.672 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:32.672 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:32.672 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:32.672 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:32.672 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113933 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114064 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113617 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:32.673 07:13:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:39.238 Attaching 4 probes... 
00:27:39.238 @path[10.0.0.2, 4421]: 14846 00:27:39.238 @path[10.0.0.2, 4421]: 19631 00:27:39.238 @path[10.0.0.2, 4421]: 20062 00:27:39.238 @path[10.0.0.2, 4421]: 20018 00:27:39.238 @path[10.0.0.2, 4421]: 19939 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114064 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:39.238 07:13:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:39.238 07:13:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:39.496 07:13:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:39.496 07:13:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114195 00:27:39.496 07:13:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113617 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:39.496 07:13:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:46.058 Attaching 4 probes... 
00:27:46.058 00:27:46.058 00:27:46.058 00:27:46.058 00:27:46.058 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114195 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:46.058 07:13:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:46.058 07:13:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:46.316 07:13:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:46.316 07:13:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113617 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:46.316 07:13:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114324 00:27:46.316 07:13:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:52.882 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:52.882 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:52.882 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:52.882 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:52.882 Attaching 4 probes... 
00:27:52.882 @path[10.0.0.2, 4421]: 19183 00:27:52.882 @path[10.0.0.2, 4421]: 19417 00:27:52.882 @path[10.0.0.2, 4421]: 19359 00:27:52.882 @path[10.0.0.2, 4421]: 19404 00:27:52.882 @path[10.0.0.2, 4421]: 19250 00:27:52.882 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:52.882 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:52.882 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:52.882 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:52.882 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:52.883 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:52.883 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114324 00:27:52.883 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:52.883 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:52.883 [2024-07-13 07:14:00.741542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.741739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 
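Each confirm_io_on_port pass above follows the same shape. A hedged sketch of one pass, using only commands that appear in this log, is shown below; the redirection of the bpftrace helper's output into trace.txt is an assumption, since only the helper invocation and the later cat/rm of trace.txt are visible here:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Set the desired ANA states on the two listeners of nqn.2016-06.io.spdk:cnode1.
  $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  # Attach the nvmf_path.bt probes to the running target while bdevperf keeps issuing verify I/O.
  /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113617 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &
  sleep 6
  # The port advertised as "optimized" must be the one the @path[...] counters show carrying I/O.
  active_port=$($rpc_py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  traced_port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
  [[ $traced_port == "$active_port" ]]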
00:27:52.883 [2024-07-13 07:14:00.741747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the
state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 [2024-07-13 07:14:00.742329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2372240 is same with the state(5) to be set 00:27:52.883 07:14:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:27:53.821 07:14:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:27:53.821 07:14:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114450 00:27:53.821 07:14:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113617 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:53.821 07:14:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:00.388 07:14:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:00.388 07:14:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:00.388 Attaching 4 probes... 
00:28:00.388 @path[10.0.0.2, 4420]: 18604 00:28:00.388 @path[10.0.0.2, 4420]: 19116 00:28:00.388 @path[10.0.0.2, 4420]: 19299 00:28:00.388 @path[10.0.0.2, 4420]: 19510 00:28:00.388 @path[10.0.0.2, 4420]: 19177 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114450 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:00.388 [2024-07-13 07:14:08.334199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:00.388 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:00.646 07:14:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:07.218 07:14:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:07.218 07:14:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114643 00:28:07.218 07:14:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113617 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:07.218 07:14:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:13.804 Attaching 4 probes... 
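Across this stretch the test exercises a full path flip: the 4421 listener is removed (multipath.sh@100, which also triggers the recv-state messages above), I/O is confirmed on the non_optimized 4420 path, and then 4421 is re-added and marked optimized (multipath.sh@107-108) so I/O migrates back. The three rpc.py calls below are the ones in the trace; only their arrangement as a standalone snippet is new:

    #!/usr/bin/env bash
    # The listener remove / re-add sequence from the trace, collected in one place.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # take the 4421 path away; the host falls back to the non_optimized 4420 path
    "$rpc_py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421

    # (confirm_io_on_port non_optimized 4420 runs in between)

    # bring 4421 back and advertise it as optimized so I/O moves back to it
    "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized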
00:28:13.804 @path[10.0.0.2, 4421]: 19295 00:28:13.804 @path[10.0.0.2, 4421]: 19521 00:28:13.804 @path[10.0.0.2, 4421]: 19604 00:28:13.804 @path[10.0.0.2, 4421]: 19457 00:28:13.804 @path[10.0.0.2, 4421]: 18870 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114643 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 113715 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 113715 ']' 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 113715 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113715 00:28:13.804 killing process with pid 113715 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113715' 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 113715 00:28:13.804 07:14:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 113715 00:28:13.804 Connection closed with partial response: 00:28:13.804 00:28:13.804 00:28:13.804 07:14:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 113715 00:28:13.804 07:14:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:13.804 [2024-07-13 07:13:23.753297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:13.804 [2024-07-13 07:13:23.753417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113715 ] 00:28:13.804 [2024-07-13 07:13:23.890640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.804 [2024-07-13 07:13:23.981555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.804 Running I/O for 90 seconds... 
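The shutdown traced above (autotest_common.sh@948-972) follows the usual killprocess pattern: make sure the pid argument is set and the process is still alive, check that it is not a sudo wrapper, then kill it and reap it with wait. A rough sketch of that pattern, reconstructed from the traced checks (the exact helper body is an assumption):

    #!/usr/bin/env bash
    # Sketch of the killprocess-style cleanup seen in the trace.
    killprocess() {
        local pid=$1 process_name=""
        [[ -n "$pid" ]] || return 1                   # "'[' -z 113715 ']'" in the trace
        kill -0 "$pid" || return 1                    # is it still running?
        if [[ "$(uname)" == "Linux" ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_2 here
        fi
        [[ "$process_name" != "sudo" ]] || return 1   # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # only valid because the pid is a child of the test shell
    }

    # e.g. killprocess 113715   # the bdevperf instance driving this test

The try.txt dump that follows is bdevperf's own log; the repeated ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions are consistent with I/O landing on a path whose ANA state was just flipped and being retried on the other path.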
00:28:13.804 [2024-07-13 07:13:33.952555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.804 [2024-07-13 07:13:33.952667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:13.804 [2024-07-13 07:13:33.952727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.804 [2024-07-13 07:13:33.952750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:13.804 [2024-07-13 07:13:33.952773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.804 [2024-07-13 07:13:33.952788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.804 [2024-07-13 07:13:33.952809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.804 [2024-07-13 07:13:33.952825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:13.804 [2024-07-13 07:13:33.952847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.804 [2024-07-13 07:13:33.952862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:13.804 [2024-07-13 07:13:33.952882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.804 [2024-07-13 07:13:33.952897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.952959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.805 [2024-07-13 07:13:33.952973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.952992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.805 [2024-07-13 07:13:33.953005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.953950] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.953999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 
07:13:33.954320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.805 [2024-07-13 07:13:33.954795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.954956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.954984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9096 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:13.805 [2024-07-13 07:13:33.955331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.805 [2024-07-13 07:13:33.955359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955784] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.955969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.955989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:28:13.806 [2024-07-13 07:13:33.956146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.806 [2024-07-13 07:13:33.956843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:13.806 [2024-07-13 07:13:33.956863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.807 [2024-07-13 07:13:33.956877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:13.807 [2024-07-13 07:13:33.956897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.807 [2024-07-13 07:13:33.956912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:13.807 [2024-07-13 07:13:33.956945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.807 [2024-07-13 07:13:33.956983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:13.807 [2024-07-13 07:13:33.957003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.807 [2024-07-13 07:13:33.957016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:13.807 [2024-07-13 07:13:33.957035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.807 [2024-07-13 07:13:33.957048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:13.807 [2024-07-13 07:13:33.957066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.807 [2024-07-13 07:13:33.957079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:13.807 [2024-07-13 07:13:33.957106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.807 [2024-07-13 07:13:33.957120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:13.807 [2024-07-13 07:13:33.958008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.807 [2024-07-13 07:13:33.958043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:13.807 [2024-07-13 07:13:33.958068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.807 [2024-07-13 07:13:33.958084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.807 [2024-07-13 07:13:33.958103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.807 [2024-07-13 07:13:33.958117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.807 [2024-07-13 07:13:33.958135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.807 [2024-07-13 07:13:33.958148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... 19 further WRITE command/completion pairs (lba 9656-9800, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:28:13.807 [2024-07-13 07:13:33.958908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.807 [2024-07-13 07:13:33.958937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:28:13.807 [2024-07-13 07:13:40.462349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.807 [2024-07-13 07:13:40.462415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
[... further WRITE command/completion pairs (lba 2240-2864) and READ command/completion pairs (lba 1968-2216), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:28:13.810 [2024-07-13 07:13:40.467906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.810 [2024-07-13 07:13:40.467922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:13.810 [2024-07-13 07:13:47.516027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.810 [2024-07-13 07:13:47.516084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
[... further WRITE command/completion pairs (lba 89720-90120), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:28:13.811 [2024-07-13 07:13:47.518438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:13.811 [2024-07-13 07:13:47.518452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0
[2024-07-13 07:13:47.518471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.811 [2024-07-13 07:13:47.518485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:13.811 [2024-07-13 07:13:47.518505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.811 [2024-07-13 07:13:47.518519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:13.811 [2024-07-13 07:13:47.518538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.518616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.518652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.518689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.518734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.518780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.518817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.518853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.518888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.518938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.518952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.520636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.520661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.520684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.520700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.520721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.520735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.520757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.520772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.520793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.520807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.520828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.520843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.520864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.520878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.520909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.520940] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.520975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.520989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.521009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.521023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.521043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.521056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.521076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.521105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.521125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.521138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.521158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.521172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.812 [2024-07-13 07:13:47.522417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 
07:13:47.522487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:13.812 [2024-07-13 07:13:47.522918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89808 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:13.812 [2024-07-13 07:13:47.522948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.522982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.522995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.523984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.523998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:28:13.813 [2024-07-13 07:13:47.524251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.813 [2024-07-13 07:13:47.524590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:13.813 [2024-07-13 07:13:47.524613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.524970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.524989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.814 [2024-07-13 07:13:47.525331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.525603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.525630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.526967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.526996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.527015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.814 [2024-07-13 07:13:47.527028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:13.814 [2024-07-13 07:13:47.527046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:13.815 
[2024-07-13 07:13:47.527282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:13.815 [2024-07-13 07:13:47.527597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.815 [2024-07-13 07:13:47.527625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:28:13.815 [2024-07-13 07:13:47.527661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.815 [2024-07-13 07:13:47.527676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:28:13.815 [2024-07-13 07:13:47.528258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:13.815 [2024-07-13 07:13:47.528272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeat between 07:13:47.527 and 07:13:47.552 for the remaining queued WRITE and READ commands on sqid:1 (nsid:1, lba 89712-90728, len:8); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:28:13.820 [2024-07-13 07:13:47.552143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.820 [2024-07-13 07:13:47.552156] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.552705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.552720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.553490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.553515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.553550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.553582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.553603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.553635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.553658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 
nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.553673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.553694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.553709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.553730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.820 [2024-07-13 07:13:47.553744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:13.820 [2024-07-13 07:13:47.553774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.553788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.553809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.553823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.553844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.553858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.553879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.553893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.553914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.553959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.553978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.553991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:28:13.821 [2024-07-13 07:13:47.554371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.554968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.554999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.555034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.555047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.555065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.555078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:13.821 [2024-07-13 07:13:47.555097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.821 [2024-07-13 07:13:47.555110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.822 [2024-07-13 07:13:47.555405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.822 [2024-07-13 07:13:47.555469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.555825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.555847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.556982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.556995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:28:13.822 [2024-07-13 07:13:47.557315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:13.822 [2024-07-13 07:13:47.557348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.822 [2024-07-13 07:13:47.557361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.557974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.557989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.823 [2024-07-13 07:13:47.558382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.823 [2024-07-13 07:13:47.558932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:13.823 [2024-07-13 07:13:47.558967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.558996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.559751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.559777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.559803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.559819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.559839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.559854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.559875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.559890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.559925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.559954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.559988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:28:13.824 [2024-07-13 07:13:47.560292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.560977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.560991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.561009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.561030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.561050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.561064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.561083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.561096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.561115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.561128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.561146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.561160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.561178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.561191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.561210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.561223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:13.824 [2024-07-13 07:13:47.561242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.824 [2024-07-13 07:13:47.561255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.825 [2024-07-13 07:13:47.561350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.825 [2024-07-13 07:13:47.561652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.561957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.561986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.562005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.562018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.562036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.562050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.562070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.562084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.562833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.562860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.562886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.562902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.562924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.562967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:28:13.825 [2024-07-13 07:13:47.563238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.825 [2024-07-13 07:13:47.563316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:13.825 [2024-07-13 07:13:47.563334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.563977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.563991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.826 [2024-07-13 07:13:47.564287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.826 [2024-07-13 07:13:47.564825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:13.826 [2024-07-13 07:13:47.564846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.564860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.564888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.564905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.564940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.564969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.564988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.565001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.565020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.565033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.565052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.565069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.565089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.565102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.565121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.565134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.565153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.565168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.565968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.565992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:28:13.827 [2024-07-13 07:13:47.566159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.566961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.566996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.567009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.567027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.567040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.567067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.567082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.567101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.567115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.567133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.567147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.567165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.827 [2024-07-13 07:13:47.567179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:13.827 [2024-07-13 07:13:47.567197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.828 [2024-07-13 07:13:47.567242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.567899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-07-13 07:13:47.567948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.567967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.568025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.568059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.568093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.568127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.568160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.568195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.568229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.568264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.568313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.568327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:28:13.828 [2024-07-13 07:13:47.569193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.828 [2024-07-13 07:13:47.569560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:13.828 [2024-07-13 07:13:47.569596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.569958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.569993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.829 [2024-07-13 07:13:47.570352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.570941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.570969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.571003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.571016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.571035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.829 [2024-07-13 07:13:47.571048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:13.829 [2024-07-13 07:13:47.571067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:28:13.830 [2024-07-13 07:13:47.571512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.571623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.571651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.572952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.572987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.830 [2024-07-13 07:13:47.573525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.830 [2024-07-13 07:13:47.573577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:13.830 [2024-07-13 07:13:47.573598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.573633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.573668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.573703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.573751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.573789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.573824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.573859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.573908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.573957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.573986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.831 [2024-07-13 07:13:47.574694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:28:13.831 [2024-07-13 07:13:47.574749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.574956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.574991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.575012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.575033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.575047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.575747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.575771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.575796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.575811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.575830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.575844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.575863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.575877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.575896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.831 [2024-07-13 07:13:47.575909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:13.831 [2024-07-13 07:13:47.575928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.575942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.575960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.575974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.575993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.832 [2024-07-13 07:13:47.576470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.576944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.576989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:13.832 [2024-07-13 07:13:47.577345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.832 [2024-07-13 07:13:47.577359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:28:13.833 [2024-07-13 07:13:47.577514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.577926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.577939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.833 [2024-07-13 07:13:47.578937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.578960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.833 [2024-07-13 07:13:47.578989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:13.833 [2024-07-13 07:13:47.579012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.579963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.579986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:28:13.834 [2024-07-13 07:13:47.580142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.834 [2024-07-13 07:13:47.580449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.834 [2024-07-13 07:13:47.580672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:13.834 [2024-07-13 07:13:47.580696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.835 [2024-07-13 07:13:47.580710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:13:47.580734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.835 [2024-07-13 07:13:47.580749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:13:47.580773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.835 [2024-07-13 07:13:47.580787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:13:47.580959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.835 [2024-07-13 07:13:47.580996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743150] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:13.835 [2024-07-13 07:14:00.743798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.743979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.743993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744102] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.835 [2024-07-13 07:14:00.744284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.835 [2024-07-13 07:14:00.744299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.744971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.744986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42152 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.745000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.836 [2024-07-13 07:14:00.745028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:13.836 [2024-07-13 07:14:00.745288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.836 [2024-07-13 07:14:00.745582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.836 [2024-07-13 07:14:00.745598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.745979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.745995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:13.837 [2024-07-13 07:14:00.746217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.837 [2024-07-13 07:14:00.746320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746514] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.837 [2024-07-13 07:14:00.746749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.837 [2024-07-13 07:14:00.746762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.746777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.838 [2024-07-13 07:14:00.746789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.746804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.838 [2024-07-13 07:14:00.746817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.746837] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.838 [2024-07-13 07:14:00.746850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.746865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.838 [2024-07-13 07:14:00.746878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.746893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.838 [2024-07-13 07:14:00.746906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.746921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.838 [2024-07-13 07:14:00.746940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.746955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821000 is same with the state(5) to be set 00:28:13.838 [2024-07-13 07:14:00.746972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:13.838 [2024-07-13 07:14:00.746983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:13.838 [2024-07-13 07:14:00.746994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42672 len:8 PRP1 0x0 PRP2 0x0 00:28:13.838 [2024-07-13 07:14:00.747008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.747069] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x821000 was disconnected and freed. reset controller. 
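Every command still queued on qpair 0x821000 above is completed with ABORTED - SQ DELETION, i.e. status (00/08): status code type 0x00 (generic) and status code 0x08, printed once per outstanding READ/WRITE as the submission queue is torn down. A minimal sketch for summarizing a dump like this offline; build.log is a placeholder for a saved copy of this console output, not a file produced by the job:
  # total number of aborted completions in the captured log
  grep -c 'ABORTED - SQ DELETION' build.log
  # split the aborted commands by opcode (READ vs WRITE)
  grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c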
00:28:13.838 [2024-07-13 07:14:00.747170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.838 [2024-07-13 07:14:00.747194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.747209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.838 [2024-07-13 07:14:00.747223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.747247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.838 [2024-07-13 07:14:00.747261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.747275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.838 [2024-07-13 07:14:00.747287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.838 [2024-07-13 07:14:00.747300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8032a0 is same with the state(5) to be set 00:28:13.838 [2024-07-13 07:14:00.748693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:13.838 [2024-07-13 07:14:00.748734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8032a0 (9): Bad file descriptor 00:28:13.838 [2024-07-13 07:14:00.748845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.838 [2024-07-13 07:14:00.748874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8032a0 with addr=10.0.0.2, port=4421 00:28:13.838 [2024-07-13 07:14:00.748898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8032a0 is same with the state(5) to be set 00:28:13.838 [2024-07-13 07:14:00.748922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8032a0 (9): Bad file descriptor 00:28:13.838 [2024-07-13 07:14:00.748944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:13.838 [2024-07-13 07:14:00.748958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:13.838 [2024-07-13 07:14:00.748973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:13.838 [2024-07-13 07:14:00.748999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:13.838 [2024-07-13 07:14:00.749013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:13.838 [2024-07-13 07:14:10.845434] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
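The sequence above is the bdev_nvme reconnect path: the TCP connect() to 10.0.0.2 port 4421 fails with errno 111, the controller is marked failed, and a retry roughly ten seconds later reports "Resetting controller successful". The knobs that govern this behaviour are the reconnect options passed when the controller is attached; a minimal sketch of the rpc.py call, with illustrative values and socket path rather than the exact ones multipath.sh passes:
  # retry the connection every 2 s and give up (delete the controller)
  # if it has not come back within 5 s -- values are examples only
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2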
00:28:13.838 Received shutdown signal, test time was about 55.146580 seconds 00:28:13.838 00:28:13.838 Latency(us) 00:28:13.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.838 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:13.838 Verification LBA range: start 0x0 length 0x4000 00:28:13.838 Nvme0n1 : 55.15 8396.22 32.80 0.00 0.00 15216.46 1511.80 7107438.78 00:28:13.838 =================================================================================================================== 00:28:13.838 Total : 8396.22 32.80 0.00 0.00 15216.46 1511.80 7107438.78 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:13.838 rmmod nvme_tcp 00:28:13.838 rmmod nvme_fabrics 00:28:13.838 rmmod nvme_keyring 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 113617 ']' 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 113617 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 113617 ']' 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 113617 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113617 00:28:13.838 killing process with pid 113617 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113617' 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 113617 00:28:13.838 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 113617 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:14.098 00:28:14.098 real 1m1.285s 00:28:14.098 user 2m51.408s 00:28:14.098 sys 0m14.952s 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.098 ************************************ 00:28:14.098 END TEST nvmf_host_multipath 00:28:14.098 07:14:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:14.098 ************************************ 00:28:14.098 07:14:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:14.098 07:14:21 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:14.098 07:14:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:14.098 07:14:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.098 07:14:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.098 ************************************ 00:28:14.098 START TEST nvmf_timeout 00:28:14.098 ************************************ 00:28:14.098 07:14:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:14.098 * Looking for test storage... 
00:28:14.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.098 
07:14:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.098 07:14:22 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:14.098 Cannot find device "nvmf_tgt_br" 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:14.098 Cannot find device "nvmf_tgt_br2" 00:28:14.098 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:28:14.099 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:14.099 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:14.099 Cannot find device "nvmf_tgt_br" 00:28:14.099 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:28:14.099 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:14.099 Cannot find device "nvmf_tgt_br2" 00:28:14.099 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:28:14.099 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:14.099 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:14.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:14.358 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:14.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:28:14.358 00:28:14.358 --- 10.0.0.2 ping statistics --- 00:28:14.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.358 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:14.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:14.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:28:14.358 00:28:14.358 --- 10.0.0.3 ping statistics --- 00:28:14.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.358 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:14.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:28:14.358 00:28:14.358 --- 10.0.0.1 ping statistics --- 00:28:14.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.358 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=114966 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 114966 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 114966 ']' 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.358 07:14:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.358 [2024-07-13 07:14:22.419289] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:28:14.358 [2024-07-13 07:14:22.419375] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.617 [2024-07-13 07:14:22.553176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:14.617 [2024-07-13 07:14:22.664059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.617 [2024-07-13 07:14:22.664123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.617 [2024-07-13 07:14:22.664133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.617 [2024-07-13 07:14:22.664141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.617 [2024-07-13 07:14:22.664147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.617 [2024-07-13 07:14:22.664310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.617 [2024-07-13 07:14:22.664321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.551 07:14:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.551 07:14:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:15.551 07:14:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:15.551 07:14:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.551 07:14:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:15.552 07:14:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.552 07:14:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.552 07:14:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:15.552 [2024-07-13 07:14:23.624003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.811 07:14:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:16.070 Malloc0 00:28:16.070 07:14:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.329 07:14:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.589 [2024-07-13 07:14:24.603608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=115056 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 115056 /var/tmp/bdevperf.sock 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115056 ']' 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:16.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:16.589 07:14:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:16.848 [2024-07-13 07:14:24.669906] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:16.848 [2024-07-13 07:14:24.670019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115056 ] 00:28:16.848 [2024-07-13 07:14:24.809774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.848 [2024-07-13 07:14:24.909667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.784 07:14:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.784 07:14:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:17.784 07:14:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:18.043 07:14:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:18.302 NVMe0n1 00:28:18.302 07:14:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=115099 00:28:18.302 07:14:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:18.302 07:14:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:18.561 Running I/O for 10 seconds... 
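Condensing the bring-up traced above into one runnable sketch: bdevperf is started with -z so it stays idle and waits for RPC configuration on /var/tmp/bdevperf.sock, the NVMe controller is attached over that socket with a 5 s controller-loss timeout and a 2 s reconnect delay, and the queued verify job is then kicked off with bdevperf.py perform_tests. The backgrounding and ordering here are inferred from the trace rather than copied from timeout.sh:
  # start bdevperf idle (-z) and let it listen on its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  # configure the initiator-side NVMe bdev over the bdevperf socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the actual I/O; with -z nothing runs until perform_tests is issued
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &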
00:28:19.500 07:14:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.500 [2024-07-13 07:14:27.510289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510529] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the 
state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.510763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2297510 is same with the state(5) to be set 00:28:19.500 [2024-07-13 07:14:27.512208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 
07:14:27.512414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.501 [2024-07-13 07:14:27.512612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.512989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.512999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.513007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.513017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.513026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.513035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.513044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.513053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.501 [2024-07-13 07:14:27.513062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.501 [2024-07-13 07:14:27.513072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:19.502 [2024-07-13 07:14:27.513223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513403] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.502 [2024-07-13 07:14:27.513411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.502 [2024-07-13 07:14:27.513429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.502 [2024-07-13 07:14:27.513447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.502 [2024-07-13 07:14:27.513465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.502 [2024-07-13 07:14:27.513487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.502 [2024-07-13 07:14:27.513505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.502 [2024-07-13 07:14:27.513523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.502 [2024-07-13 07:14:27.513541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513639] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78880 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.502 [2024-07-13 07:14:27.513939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.502 [2024-07-13 07:14:27.513962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.502 [2024-07-13 07:14:27.513973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.513981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.513992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:19.503 [2024-07-13 07:14:27.514089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514284] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.503 [2024-07-13 07:14:27.514842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.503 [2024-07-13 07:14:27.514867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:19.503 [2024-07-13 07:14:27.514876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:19.503 [2024-07-13 07:14:27.514884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:28:19.504 [2024-07-13 07:14:27.514899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.504 [2024-07-13 07:14:27.514952] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2303d20 was disconnected and freed. reset controller. 
00:28:19.504 [2024-07-13 07:14:27.515202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.504 [2024-07-13 07:14:27.515271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3690 (9): Bad file descriptor 00:28:19.504 [2024-07-13 07:14:27.515362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.504 [2024-07-13 07:14:27.515382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e3690 with addr=10.0.0.2, port=4420 00:28:19.504 [2024-07-13 07:14:27.515393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e3690 is same with the state(5) to be set 00:28:19.504 [2024-07-13 07:14:27.515416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3690 (9): Bad file descriptor 00:28:19.504 [2024-07-13 07:14:27.515432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.504 [2024-07-13 07:14:27.515441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.504 [2024-07-13 07:14:27.515450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.504 [2024-07-13 07:14:27.515468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.504 [2024-07-13 07:14:27.515479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.504 07:14:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:22.039 [2024-07-13 07:14:29.515728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:22.039 [2024-07-13 07:14:29.515807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e3690 with addr=10.0.0.2, port=4420 00:28:22.039 [2024-07-13 07:14:29.515823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e3690 is same with the state(5) to be set 00:28:22.039 [2024-07-13 07:14:29.515850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3690 (9): Bad file descriptor 00:28:22.039 [2024-07-13 07:14:29.515880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:22.039 [2024-07-13 07:14:29.515892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:22.039 [2024-07-13 07:14:29.515903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.039 [2024-07-13 07:14:29.515931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:22.039 [2024-07-13 07:14:29.515942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.039 07:14:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:22.039 07:14:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:22.039 07:14:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:22.039 07:14:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:22.039 07:14:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:22.039 07:14:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:22.039 07:14:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:22.039 07:14:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:22.039 07:14:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:23.971 [2024-07-13 07:14:31.516112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.971 [2024-07-13 07:14:31.516179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e3690 with addr=10.0.0.2, port=4420 00:28:23.971 [2024-07-13 07:14:31.516195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e3690 is same with the state(5) to be set 00:28:23.971 [2024-07-13 07:14:31.516223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3690 (9): Bad file descriptor 00:28:23.971 [2024-07-13 07:14:31.516242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.971 [2024-07-13 07:14:31.516251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.971 [2024-07-13 07:14:31.516262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.971 [2024-07-13 07:14:31.516288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:23.971 [2024-07-13 07:14:31.516299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.871 [2024-07-13 07:14:33.516401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.871 [2024-07-13 07:14:33.516463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.871 [2024-07-13 07:14:33.516475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.871 [2024-07-13 07:14:33.516484] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:25.871 [2024-07-13 07:14:33.516510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
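The long run of WRITE/READ entries above is each outstanding bdevperf command being printed as it is aborted with "ABORTED - SQ DELETION" when the qpair is torn down, and the connection-refused (errno 111) failures that follow at 07:14:27, :29, :31 and :33 are the reconnect attempts paced by --reconnect-delay-sec 2. Once the --ctrlr-loss-timeout-sec 5 budget is exhausted the controller is left in failed state and deleted, so the test expects bdev_nvme_get_controllers and bdev_get_bdevs to come back empty, which the [[ '' == '' ]] checks further below confirm. A minimal re-check in the same style as the traced get_controller/get_bdev helpers (the echo message is illustrative, not part of the test):

    # Hypothetical post-timeout check mirroring the traced get_controller/get_bdev helpers
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ctrlr=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$($rpc_py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
    [[ -z $ctrlr && -z $bdev ]] && echo 'NVMe0 removed after ctrlr-loss timeout'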
00:28:26.804 00:28:26.804 Latency(us) 00:28:26.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.804 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:26.804 Verification LBA range: start 0x0 length 0x4000 00:28:26.804 NVMe0n1 : 8.13 1206.21 4.71 15.74 0.00 104573.80 2025.66 7015926.69 00:28:26.804 =================================================================================================================== 00:28:26.804 Total : 1206.21 4.71 15.74 0.00 104573.80 2025.66 7015926.69 00:28:26.804 0 00:28:27.063 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:28:27.063 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:27.063 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:27.321 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:27.321 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:28:27.321 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:27.321 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 115099 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 115056 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115056 ']' 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115056 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115056 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:27.886 killing process with pid 115056 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115056' 00:28:27.886 Received shutdown signal, test time was about 9.302465 seconds 00:28:27.886 00:28:27.886 Latency(us) 00:28:27.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.886 =================================================================================================================== 00:28:27.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115056 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115056 00:28:27.886 07:14:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.144 [2024-07-13 07:14:36.133201] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.144 07:14:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=115252 00:28:28.144 07:14:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 115252 
/var/tmp/bdevperf.sock 00:28:28.144 07:14:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:28.144 07:14:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115252 ']' 00:28:28.144 07:14:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:28.144 07:14:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:28.144 07:14:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:28.144 07:14:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.144 07:14:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:28.144 [2024-07-13 07:14:36.212767] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:28.144 [2024-07-13 07:14:36.212868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115252 ] 00:28:28.402 [2024-07-13 07:14:36.356111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.402 [2024-07-13 07:14:36.444231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.332 07:14:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.332 07:14:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:29.332 07:14:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:29.332 07:14:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:29.898 NVMe0n1 00:28:29.898 07:14:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=115300 00:28:29.899 07:14:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:29.899 07:14:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:28:29.899 Running I/O for 10 seconds... 
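Before the "Running I/O" line above, the second test case brings up its own bdevperf instance and wires it to the target with an aggressive reconnect policy. A condensed replay of those steps, with paths and arguments taken from the trace and the surrounding control flow assumed:

  # Paths and flags are copied from the log; running them by hand like this,
  # outside the autotest harness, is an assumption.
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC_SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z, wait for RPC) on core 2 (mask 0x4) with a
  # 128-deep, 4096-byte verify workload prepared for a 10 s run.
  $SPDK/build/examples/bdevperf -m 0x4 -z -r $RPC_SOCK -q 128 -o 4096 -w verify -t 10 -f &

  # Apply the bdev_nvme option traced above (-r -1; presumably an unlimited
  # retry count, the log does not spell out the semantics).
  $SPDK/scripts/rpc.py -s $RPC_SOCK bdev_nvme_set_options -r -1

  # Attach the target: give up on the controller after 5 s of loss, fail
  # pending I/O after 2 s, and retry the connection once per second.
  $SPDK/scripts/rpc.py -s $RPC_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Kick off the 10 s verify workload defined above.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $RPC_SOCK perform_tests &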
00:28:30.834 07:14:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:31.095 [2024-07-13 07:14:38.984469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984717] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the 
state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.984994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.985003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.985011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.985019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.985026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.985034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.985041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.095 [2024-07-13 07:14:38.985049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438bb0 is same with the state(5) to be set 00:28:31.096 [2024-07-13 07:14:38.985673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.985976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.985983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:31.096 [2024-07-13 07:14:38.986296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.096 [2024-07-13 07:14:38.986343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.096 [2024-07-13 07:14:38.986352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986487] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.097 [2024-07-13 07:14:38.986843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.986862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.986881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.986900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.986924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:58 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.986958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.986976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.986986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.986994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.987004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.987018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.987028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.987040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.987049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.987057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.987067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.987075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.097 [2024-07-13 07:14:38.987085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.097 [2024-07-13 07:14:38.987093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84776 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 
07:14:38.987342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.098 [2024-07-13 07:14:38.987801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.098 [2024-07-13 07:14:38.987809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.987986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.987995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988165] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.099 [2024-07-13 07:14:38.988244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:31.099 [2024-07-13 07:14:38.988279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:31.099 [2024-07-13 07:14:38.988302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85224 len:8 PRP1 0x0 PRP2 0x0 00:28:31.099 [2024-07-13 07:14:38.988317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.099 [2024-07-13 07:14:38.988395] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2136d20 was disconnected and freed. reset controller. 
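Everything from the remove_listener call down to the "qpair 0x2136d20 was disconnected and freed" line is the same event seen from bdevperf's side: the TCP connection drops, so every queued READ/WRITE is completed manually as ABORTED - SQ DELETION before the reset begins. If this output is saved to a file, the size of that backlog can be tallied directly (the file name here is hypothetical):

  # One completion line is printed per aborted command in the dump above.
  grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log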
00:28:31.099 [2024-07-13 07:14:38.988630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.099 [2024-07-13 07:14:38.988724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116690 (9): Bad file descriptor 00:28:31.099 [2024-07-13 07:14:38.988841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.099 [2024-07-13 07:14:38.988861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2116690 with addr=10.0.0.2, port=4420 00:28:31.099 [2024-07-13 07:14:38.988871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116690 is same with the state(5) to be set 00:28:31.099 [2024-07-13 07:14:38.988887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116690 (9): Bad file descriptor 00:28:31.099 [2024-07-13 07:14:38.988908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.099 [2024-07-13 07:14:38.988917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.099 [2024-07-13 07:14:38.988927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.099 [2024-07-13 07:14:38.988954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.099 [2024-07-13 07:14:38.988975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.099 07:14:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:28:32.036 [2024-07-13 07:14:39.989173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.036 [2024-07-13 07:14:39.989292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2116690 with addr=10.0.0.2, port=4420 00:28:32.036 [2024-07-13 07:14:39.989309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116690 is same with the state(5) to be set 00:28:32.036 [2024-07-13 07:14:39.989341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116690 (9): Bad file descriptor 00:28:32.036 [2024-07-13 07:14:39.989360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.036 [2024-07-13 07:14:39.989370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.036 [2024-07-13 07:14:39.989383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.036 [2024-07-13 07:14:39.989415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
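The reconnect loop above fails roughly once per second: connect() to 10.0.0.2:4420 returns errno 111 because nothing is listening on the target port at this point in the test, the half-open qpair is then flushed with errno 9 ("Bad file descriptor"), and bdev_nvme logs "Resetting controller failed." before the next attempt. A quick lookup sketch for those two numbers, assuming a Linux host such as the VM used in this run:

#!/usr/bin/env bash
# Print the symbolic name and message for the errno values in the reconnect loop:
#   111 -> ECONNREFUSED (the target listener is gone)
#     9 -> EBADF        (the socket was already torn down before the flush)
for e in 111 9; do
    python3 -c "import errno, os, sys; n = int(sys.argv[1]); print(n, errno.errorcode[n], '-', os.strerror(n))" "$e"
done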
00:28:32.036 [2024-07-13 07:14:39.989426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.036 07:14:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.295 [2024-07-13 07:14:40.250360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.295 07:14:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 115300 00:28:33.231 [2024-07-13 07:14:41.001009] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:39.790 00:28:39.790 Latency(us) 00:28:39.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.790 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:39.791 Verification LBA range: start 0x0 length 0x4000 00:28:39.791 NVMe0n1 : 10.01 6928.62 27.06 0.00 0.00 18435.91 1794.79 3035150.89 00:28:39.791 =================================================================================================================== 00:28:39.791 Total : 6928.62 27.06 0.00 0.00 18435.91 1794.79 3035150.89 00:28:39.791 0 00:28:39.791 07:14:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=115417 00:28:39.791 07:14:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:39.791 07:14:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:28:40.048 Running I/O for 10 seconds... 00:28:40.981 07:14:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.242 [2024-07-13 07:14:49.123866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.242 [2024-07-13 07:14:49.123955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.242 [2024-07-13 07:14:49.123982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.242 [2024-07-13 07:14:49.123991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.123999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124045] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the 
state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.124498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228f790 is same with the state(5) to be set 00:28:41.243 [2024-07-13 07:14:49.125004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.243 [2024-07-13 07:14:49.125076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.243 [2024-07-13 07:14:49.125100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.243 [2024-07-13 07:14:49.125110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.243 [2024-07-13 07:14:49.125121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.243 [2024-07-13 07:14:49.125130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.243 [2024-07-13 07:14:49.125140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 
07:14:49.125203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.244 [2024-07-13 07:14:49.125723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.244 [2024-07-13 07:14:49.125731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.125979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.125987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:41.245 [2024-07-13 07:14:49.125996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.126004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.126022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126181] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.245 [2024-07-13 07:14:49.126190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.245 [2024-07-13 07:14:49.126278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.245 [2024-07-13 07:14:49.126287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126358] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81888 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 
[2024-07-13 07:14:49.126751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.246 [2024-07-13 07:14:49.126856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.246 [2024-07-13 07:14:49.126864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.126873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.126880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.126890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.126906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.126915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.126924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.126933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.126941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.126950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.126958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.126968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.126976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.126986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.126994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.247 [2024-07-13 07:14:49.127457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:41.247 [2024-07-13 07:14:49.127499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:41.247 [2024-07-13 07:14:49.127508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82288 len:8 
PRP1 0x0 PRP2 0x0 00:28:41.247 [2024-07-13 07:14:49.127517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.247 [2024-07-13 07:14:49.127600] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21396a0 was disconnected and freed. reset controller. 00:28:41.247 [2024-07-13 07:14:49.127802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.247 [2024-07-13 07:14:49.127889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116690 (9): Bad file descriptor 00:28:41.248 [2024-07-13 07:14:49.128005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-07-13 07:14:49.128024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2116690 with addr=10.0.0.2, port=4420 00:28:41.248 [2024-07-13 07:14:49.128034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116690 is same with the state(5) to be set 00:28:41.248 [2024-07-13 07:14:49.128050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116690 (9): Bad file descriptor 00:28:41.248 [2024-07-13 07:14:49.128064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.248 [2024-07-13 07:14:49.128073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.248 [2024-07-13 07:14:49.128084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.248 [2024-07-13 07:14:49.128103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.248 [2024-07-13 07:14:49.128113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.248 07:14:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:28:42.182 [2024-07-13 07:14:50.128290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.182 [2024-07-13 07:14:50.128398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2116690 with addr=10.0.0.2, port=4420 00:28:42.182 [2024-07-13 07:14:50.128415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116690 is same with the state(5) to be set 00:28:42.182 [2024-07-13 07:14:50.128447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116690 (9): Bad file descriptor 00:28:42.182 [2024-07-13 07:14:50.128467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.182 [2024-07-13 07:14:50.128478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.182 [2024-07-13 07:14:50.128489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.182 [2024-07-13 07:14:50.128521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.182 [2024-07-13 07:14:50.128533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.137 [2024-07-13 07:14:51.128752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.137 [2024-07-13 07:14:51.128869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2116690 with addr=10.0.0.2, port=4420 00:28:43.137 [2024-07-13 07:14:51.128886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116690 is same with the state(5) to be set 00:28:43.137 [2024-07-13 07:14:51.128918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116690 (9): Bad file descriptor 00:28:43.137 [2024-07-13 07:14:51.128937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.137 [2024-07-13 07:14:51.128959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.137 [2024-07-13 07:14:51.128970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.137 [2024-07-13 07:14:51.129001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:43.137 [2024-07-13 07:14:51.129013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.119 [2024-07-13 07:14:52.132121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.119 [2024-07-13 07:14:52.132233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2116690 with addr=10.0.0.2, port=4420 00:28:44.119 [2024-07-13 07:14:52.132249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2116690 is same with the state(5) to be set 00:28:44.119 [2024-07-13 07:14:52.132486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116690 (9): Bad file descriptor 00:28:44.119 [2024-07-13 07:14:52.132726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.119 [2024-07-13 07:14:52.132742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.119 [2024-07-13 07:14:52.132754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.119 [2024-07-13 07:14:52.136029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.119 [2024-07-13 07:14:52.136070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.119 07:14:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.376 [2024-07-13 07:14:52.343903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.376 07:14:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 115417 00:28:45.309 [2024-07-13 07:14:53.174263] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
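For reference, the recovery traced above (writes aborted with SQ DELETION, the host looping on connect() errno 111 against 10.0.0.2:4420, then the reset finally succeeding once the listener is restored at host/timeout.sh@102) can be reproduced by hand with the same rpc.py calls that appear in this trace. A minimal bash sketch, assuming an SPDK target is already up with subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 and rpc.py pointed at the target's default RPC socket:

# Fault injection as in the trace: drop the TCP listener so host reconnects
# fail with ECONNREFUSED (errno 111), then restore it so the controller reset
# can complete on the next attempt.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # host starts failing reconnects
sleep 3                                                                 # same pause as host/timeout.sh@101 above
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # next reconnect attempt succeeds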
00:28:50.573 
00:28:50.573                                                                            Latency(us)
00:28:50.573 Device Information                                                        : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:50.573 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:50.573      Verification LBA range: start 0x0 length 0x4000
00:28:50.573      NVMe0n1                                                               :      10.01    5496.01      21.47    4334.41      0.00   12996.28     592.06 3019898.88
00:28:50.573 ===================================================================================================================
00:28:50.573 Total                                                                      :               5496.01      21.47    4334.41      0.00   12996.28       0.00 3019898.88
00:28:50.573 0
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 115252
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115252 ']'
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115252
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115252
00:28:50.573 killing process with pid 115252
00:28:50.573 Received shutdown signal, test time was about 10.000000 seconds
00:28:50.573 
00:28:50.573                                                                            Latency(us)
00:28:50.573 Device Information                                                        : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:50.573 ===================================================================================================================
00:28:50.573 Total                                                                      :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115252'
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115252
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115252
00:28:50.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=115538
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 115538 /var/tmp/bdevperf.sock
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 115538 ']'
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:50.573 07:14:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:28:50.573 [2024-07-13 07:14:58.380745] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
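The trace above tears down the previous bdevperf (pid 115252) and starts a fresh one with -z, so the new process sits idle and waits to be driven over the private RPC socket named by -r instead of running a job immediately, while the harness's waitforlisten blocks until that socket is usable. A rough bash equivalent of that launch-and-wait step, reusing the exact command line from the trace (the polling loop is only an illustrative stand-in for waitforlisten, not its actual implementation):

# Start bdevperf in wait-for-RPC mode (-z) on its own RPC socket, then block
# until the UNIX-domain socket shows up before sending any configuration RPCs.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
SOCK=/var/tmp/bdevperf.sock

$BDEVPERF -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!

for _ in $(seq 1 100); do          # poll for up to ~10 s
    [ -S "$SOCK" ] && break
    sleep 0.1
done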
00:28:50.573 [2024-07-13 07:14:58.381197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115538 ]
00:28:50.573 [2024-07-13 07:14:58.521098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:50.573 [2024-07-13 07:14:58.600066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:28:51.508 07:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:51.508 07:14:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:28:51.508 07:14:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 115538 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:28:51.508 07:14:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=115565
00:28:51.508 07:14:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:28:51.766 07:14:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:28:52.024 NVMe0n1
00:28:52.024 07:14:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:52.024 07:14:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=115614
00:28:52.024 07:14:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:28:52.024 Running I/O for 10 seconds...
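Once the bdevperf app is up (reactor started on core 2 above), the trace configures it entirely over that RPC socket: bdev_nvme_set_options, then bdev_nvme_attach_controller with the reconnect knobs this subtest exercises (--ctrlr-loss-timeout-sec 5, --reconnect-delay-sec 2), and finally bdevperf.py perform_tests to kick off the queued randread job. A condensed bash sketch of those calls, copied from host/timeout.sh@118 through @123 in the trace:

# Drive the already-running bdevperf through its RPC socket, as in the trace.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

$RPC bdev_nvme_set_options -r -1 -e 9
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # backgrounded, as at @124

The attach call prints the new bdev name (NVMe0n1 above). Roughly speaking, the two timeout options mean reconnect attempts are spaced --reconnect-delay-sec apart and given up once --ctrlr-loss-timeout-sec has passed without a successful reset.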
00:28:52.959 07:15:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.219 [2024-07-13 07:15:01.167856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.167945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.167956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.167971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.167979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.167986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.167994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168102] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.219 [2024-07-13 07:15:01.168117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2292f30 is same with the state(5) to be set 00:28:53.220 [2024-07-13 07:15:01.168545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.168982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.168993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1728 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:53.220 [2024-07-13 07:15:01.169284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.220 [2024-07-13 07:15:01.169334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.220 [2024-07-13 07:15:01.169343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 
07:15:01.169482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169898] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.169986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.169997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.221 [2024-07-13 07:15:01.170202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.221 [2024-07-13 07:15:01.170213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:53.222 [2024-07-13 07:15:01.170319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170514] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170970] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.170979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.170991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.171000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.171011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.171020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.171032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.171041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.171052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.171061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.171072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.171087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.171098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.171106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.171117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.222 [2024-07-13 07:15:01.171126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.222 [2024-07-13 07:15:01.171136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.223 [2024-07-13 07:15:01.171145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.223 [2024-07-13 07:15:01.171155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.223 [2024-07-13 07:15:01.171164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.223 [2024-07-13 07:15:01.171174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.223 [2024-07-13 07:15:01.171183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.223 [2024-07-13 07:15:01.171194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.223 [2024-07-13 07:15:01.171203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.223 [2024-07-13 07:15:01.171214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.223 [2024-07-13 07:15:01.171223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.223 [2024-07-13 07:15:01.171233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd1d20 is same with the state(5) to be set 00:28:53.223 [2024-07-13 07:15:01.171245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:53.223 [2024-07-13 07:15:01.171252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:53.223 [2024-07-13 07:15:01.171265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102960 len:8 PRP1 0x0 PRP2 0x0 00:28:53.223 [2024-07-13 07:15:01.171274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.223 [2024-07-13 07:15:01.171336] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcd1d20 was disconnected and freed. reset controller. 00:28:53.223 [2024-07-13 07:15:01.171619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:53.223 [2024-07-13 07:15:01.171704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb1690 (9): Bad file descriptor 00:28:53.223 [2024-07-13 07:15:01.171822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.223 [2024-07-13 07:15:01.171843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb1690 with addr=10.0.0.2, port=4420 00:28:53.223 [2024-07-13 07:15:01.171854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb1690 is same with the state(5) to be set 00:28:53.223 [2024-07-13 07:15:01.171870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb1690 (9): Bad file descriptor 00:28:53.223 [2024-07-13 07:15:01.171886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:53.223 [2024-07-13 07:15:01.171895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:53.223 [2024-07-13 07:15:01.171906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:53.223 [2024-07-13 07:15:01.171928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
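The connect() failures above (errno = 111, connection refused) mean that nothing is listening on 10.0.0.2:4420 at that moment, so every controller reset ends with _bdev_nvme_reset_ctrlr_complete reporting failure and another attempt being scheduled. As a hedged illustration only (not the exact invocation this test uses), the retry cadence and give-up point for such a controller are normally chosen when it is attached; the option names below assume a recent SPDK rpc.py:

# Hedged sketch: attach an NVMe-oF/TCP controller with an explicit reconnect
# policy. --reconnect-delay-sec sets the gap between attempts and
# --ctrlr-loss-timeout-sec is when the bdev layer finally gives up.
scripts/rpc.py bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 2 \
    --ctrlr-loss-timeout-sec 8 \
    --fast-io-fail-timeout-sec 4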
00:28:53.223 [2024-07-13 07:15:01.171939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:53.223 07:15:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 115614 00:28:55.124 [2024-07-13 07:15:03.172346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.124 [2024-07-13 07:15:03.172431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb1690 with addr=10.0.0.2, port=4420 00:28:55.124 [2024-07-13 07:15:03.172463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb1690 is same with the state(5) to be set 00:28:55.124 [2024-07-13 07:15:03.172507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb1690 (9): Bad file descriptor 00:28:55.124 [2024-07-13 07:15:03.172529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.124 [2024-07-13 07:15:03.172539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.124 [2024-07-13 07:15:03.172550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.124 [2024-07-13 07:15:03.172608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:55.124 [2024-07-13 07:15:03.172622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.652 [2024-07-13 07:15:05.172844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.652 [2024-07-13 07:15:05.172952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb1690 with addr=10.0.0.2, port=4420 00:28:57.652 [2024-07-13 07:15:05.172968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb1690 is same with the state(5) to be set 00:28:57.652 [2024-07-13 07:15:05.173010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb1690 (9): Bad file descriptor 00:28:57.652 [2024-07-13 07:15:05.173032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.652 [2024-07-13 07:15:05.173042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.652 [2024-07-13 07:15:05.173055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.652 [2024-07-13 07:15:05.173085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.652 [2024-07-13 07:15:05.173097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.550 [2024-07-13 07:15:07.173170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
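Each retry above lands roughly two seconds apart and fails identically, which points at the target side having been torn down rather than a host-side defect. A quick way to confirm that from the initiator, sketched under the assumption that bash (and optionally nvme-cli) is available:

# Hedged sketch: check whether anything is listening on the NVMe-oF/TCP port
# before digging into host-side reconnect logic.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "port 4420 is reachable"
    # Optional, if nvme-cli is installed: confirm the discovery service answers.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
else
    echo "connection refused or timed out - no listener (matches errno 111 above)"
fi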
00:28:59.550 [2024-07-13 07:15:07.173256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.550 [2024-07-13 07:15:07.173281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.550 [2024-07-13 07:15:07.173292] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:59.550 [2024-07-13 07:15:07.173323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.117 00:29:00.117 Latency(us) 00:29:00.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.117 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:29:00.117 NVMe0n1 : 8.17 2968.10 11.59 15.66 0.00 42858.12 2949.12 7015926.69 00:29:00.117 =================================================================================================================== 00:29:00.117 Total : 2968.10 11.59 15.66 0.00 42858.12 2949.12 7015926.69 00:29:00.117 0 00:29:00.375 07:15:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:00.375 Attaching 5 probes... 00:29:00.376 1290.700922: reset bdev controller NVMe0 00:29:00.376 1290.831870: reconnect bdev controller NVMe0 00:29:00.376 3291.207110: reconnect delay bdev controller NVMe0 00:29:00.376 3291.262372: reconnect bdev controller NVMe0 00:29:00.376 5291.745492: reconnect delay bdev controller NVMe0 00:29:00.376 5291.782196: reconnect bdev controller NVMe0 00:29:00.376 7292.217134: reconnect delay bdev controller NVMe0 00:29:00.376 7292.240682: reconnect bdev controller NVMe0 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 115565 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 115538 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 115538 ']' 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 115538 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115538 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:00.376 killing process with pid 115538 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115538' 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 115538 00:29:00.376 Received shutdown signal, test time was about 8.239454 seconds 00:29:00.376 00:29:00.376 Latency(us) 00:29:00.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.376 =================================================================================================================== 
00:29:00.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.376 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 115538 00:29:00.634 07:15:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:00.893 rmmod nvme_tcp 00:29:00.893 rmmod nvme_fabrics 00:29:00.893 rmmod nvme_keyring 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 114966 ']' 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 114966 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 114966 ']' 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 114966 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114966 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:00.893 killing process with pid 114966 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114966' 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 114966 00:29:00.893 07:15:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 114966 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:01.152 00:29:01.152 real 0m47.227s 
00:29:01.152 user 2m19.120s 00:29:01.152 sys 0m5.111s 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:01.152 07:15:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:01.152 ************************************ 00:29:01.152 END TEST nvmf_timeout 00:29:01.152 ************************************ 00:29:01.411 07:15:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:01.411 07:15:09 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:29:01.411 07:15:09 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:01.411 07:15:09 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:01.411 07:15:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.411 07:15:09 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:01.411 00:29:01.411 real 21m50.764s 00:29:01.411 user 65m6.796s 00:29:01.411 sys 4m30.096s 00:29:01.411 07:15:09 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:01.411 07:15:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.411 ************************************ 00:29:01.411 END TEST nvmf_tcp 00:29:01.411 ************************************ 00:29:01.411 07:15:09 -- common/autotest_common.sh@1142 -- # return 0 00:29:01.411 07:15:09 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:01.411 07:15:09 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:01.411 07:15:09 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:01.411 07:15:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.411 07:15:09 -- common/autotest_common.sh@10 -- # set +x 00:29:01.411 ************************************ 00:29:01.411 START TEST spdkcli_nvmf_tcp 00:29:01.411 ************************************ 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:01.411 * Looking for test storage... 
00:29:01.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:01.411 07:15:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=115831 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 115831 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 115831 ']' 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.412 07:15:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:01.670 [2024-07-13 07:15:09.504486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:29:01.670 [2024-07-13 07:15:09.504642] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115831 ] 00:29:01.670 [2024-07-13 07:15:09.637959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:01.670 [2024-07-13 07:15:09.722337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.670 [2024-07-13 07:15:09.722348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.605 07:15:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.605 07:15:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:02.605 07:15:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:02.605 07:15:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:02.605 07:15:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.605 07:15:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:02.605 07:15:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:02.606 07:15:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:02.606 07:15:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:02.606 07:15:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.606 07:15:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:02.606 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:02.606 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:02.606 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:02.606 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:02.606 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:02.606 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:02.606 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:02.606 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:29:02.606 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:02.606 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:02.606 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:02.606 ' 00:29:05.133 [2024-07-13 07:15:13.104617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.539 [2024-07-13 07:15:14.374382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:09.067 [2024-07-13 07:15:16.729085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:10.965 [2024-07-13 07:15:18.755248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:12.383 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:12.383 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:12.383 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:12.383 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:12.383 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:12.383 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:12.383 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:12.383 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 
IPv4', '127.0.0.1:4260', True] 00:29:12.383 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:12.383 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:12.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:12.383 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:12.383 07:15:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:12.383 07:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:12.383 07:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.640 07:15:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:12.640 07:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:12.640 07:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.640 07:15:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:12.640 07:15:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:12.898 07:15:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:12.898 07:15:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 
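Every spdkcli line above maps onto a JSON-RPC call, so the same configuration tree can be built with plain rpc.py when spdkcli is not convenient. A minimal hedged sketch for one of the subsystems, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock:

# Hedged sketch: rpc.py equivalent of a slice of the spdkcli session above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_malloc_create 32 512 -b Malloc3
$RPC nvmf_create_transport -t tcp
$RPC nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
$RPC nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -f ipv4 -a 127.0.0.1 -s 4260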
00:29:12.898 07:15:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:12.898 07:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:12.898 07:15:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:13.156 07:15:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:13.156 07:15:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:13.156 07:15:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:13.156 07:15:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:13.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:13.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:13.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:13.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:13.156 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:13.156 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:13.156 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:13.156 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:13.156 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:13.156 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:13.156 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:13.156 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:13.156 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:13.156 ' 00:29:18.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:18.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:18.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:18.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:18.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:18.420 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:18.420 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:18.420 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:18.420 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:18.420 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:18.420 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:18.420 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:18.420 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:18.420 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:18.420 07:15:26 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 115831 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 115831 ']' 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 115831 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115831 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:18.420 killing process with pid 115831 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115831' 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 115831 00:29:18.420 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 115831 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 115831 ']' 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 115831 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 115831 ']' 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 115831 00:29:18.678 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (115831) - No such process 00:29:18.678 Process with pid 115831 is not found 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 115831 is not found' 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:18.678 00:29:18.678 real 0m17.409s 00:29:18.678 user 0m37.447s 00:29:18.678 sys 0m1.045s 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:18.678 07:15:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.678 ************************************ 00:29:18.678 END TEST spdkcli_nvmf_tcp 00:29:18.678 ************************************ 00:29:18.938 07:15:26 -- common/autotest_common.sh@1142 -- # return 0 00:29:18.938 07:15:26 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:18.938 07:15:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:18.938 07:15:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:18.938 07:15:26 -- common/autotest_common.sh@10 -- # set +x 00:29:18.938 ************************************ 00:29:18.938 START TEST nvmf_identify_passthru 00:29:18.938 
************************************ 00:29:18.938 07:15:26 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:18.938 * Looking for test storage... 00:29:18.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:18.938 07:15:26 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:18.938 07:15:26 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.938 07:15:26 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.938 07:15:26 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.938 07:15:26 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.938 07:15:26 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.938 07:15:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.938 07:15:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:18.938 07:15:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:18.938 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:18.938 07:15:26 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:18.938 07:15:26 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.938 07:15:26 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.938 07:15:26 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.939 07:15:26 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.939 07:15:26 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.939 07:15:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.939 07:15:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:18.939 07:15:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.939 07:15:26 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.939 07:15:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:18.939 07:15:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:18.939 Cannot find device "nvmf_tgt_br" 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:18.939 Cannot find device "nvmf_tgt_br2" 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:18.939 Cannot find device "nvmf_tgt_br" 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:18.939 Cannot find device "nvmf_tgt_br2" 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:29:18.939 07:15:26 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:19.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:19.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:19.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:29:19.198 00:29:19.198 --- 10.0.0.2 ping statistics --- 00:29:19.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.198 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:19.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:19.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:29:19.198 00:29:19.198 --- 10.0.0.3 ping statistics --- 00:29:19.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.198 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:19.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:29:19.198 00:29:19.198 --- 10.0.0.1 ping statistics --- 00:29:19.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.198 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.198 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:29:19.199 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:19.199 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.199 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:19.199 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:19.199 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.199 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:19.199 07:15:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:19.457 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:19.457 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:19.457 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:29:19.457 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:29:19.457 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:29:19.457 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:19.457 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:19.457 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
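Condensed, the controller-selection step traced above amounts to the shell sketch below. The repo path and the 0000:00:10.0 address are simply the values from this run, so treat this as a sketch of what get_first_nvme_bdf and the identify pipeline do, not a verbatim excerpt of the harness.

    # Pick the first NVMe PCIe address reported by gen_nvme.sh ...
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}                                   # 0000:00:10.0 on this machine
    # ... then read its serial number over PCIe, keeping only the third field
    # of the "Serial Number: 12340" line printed by spdk_nvme_identify.
    nvme_serial_number=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')

The same grep/awk pattern is reused later against the NVMe/TCP listener so the passthru-reported serial and model numbers can be compared with the ones read directly over PCIe.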
00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:19.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=116318 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 116318 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 116318 ']' 00:29:19.716 07:15:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.716 07:15:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:19.975 [2024-07-13 07:15:27.826229] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:29:19.975 [2024-07-13 07:15:27.826354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.975 [2024-07-13 07:15:27.964796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.233 [2024-07-13 07:15:28.064893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.233 [2024-07-13 07:15:28.064952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.233 [2024-07-13 07:15:28.064962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.233 [2024-07-13 07:15:28.064970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:20.233 [2024-07-13 07:15:28.064976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.233 [2024-07-13 07:15:28.065342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.233 [2024-07-13 07:15:28.065588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.233 [2024-07-13 07:15:28.065661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.234 [2024-07-13 07:15:28.065673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.801 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.801 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:29:20.801 07:15:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:20.801 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.801 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:20.801 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.801 07:15:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:20.801 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.801 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.060 [2024-07-13 07:15:28.966473] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:21.060 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.060 07:15:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:21.060 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.060 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.060 [2024-07-13 07:15:28.980793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.060 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.060 07:15:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:21.060 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:21.060 07:15:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.060 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.060 Nvme0n1 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.060 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.060 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.060 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.060 [2024-07-13 07:15:29.127957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.060 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.060 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.319 [ 00:29:21.319 { 00:29:21.319 "allow_any_host": true, 00:29:21.319 "hosts": [], 00:29:21.319 "listen_addresses": [], 00:29:21.319 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:21.319 "subtype": "Discovery" 00:29:21.319 }, 00:29:21.319 { 00:29:21.319 "allow_any_host": true, 00:29:21.319 "hosts": [], 00:29:21.319 "listen_addresses": [ 00:29:21.319 { 00:29:21.319 "adrfam": "IPv4", 00:29:21.319 "traddr": "10.0.0.2", 00:29:21.319 "trsvcid": "4420", 00:29:21.319 "trtype": "TCP" 00:29:21.319 } 00:29:21.319 ], 00:29:21.319 "max_cntlid": 65519, 00:29:21.319 "max_namespaces": 1, 00:29:21.319 "min_cntlid": 1, 00:29:21.319 "model_number": "SPDK bdev Controller", 00:29:21.319 "namespaces": [ 00:29:21.319 { 00:29:21.319 "bdev_name": "Nvme0n1", 00:29:21.319 "name": "Nvme0n1", 00:29:21.319 "nguid": "BE88C6DFF30F4E1B948ADE82140E91C2", 00:29:21.319 "nsid": 1, 00:29:21.319 "uuid": "be88c6df-f30f-4e1b-948a-de82140e91c2" 00:29:21.319 } 00:29:21.319 ], 00:29:21.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:21.319 "serial_number": "SPDK00000000000001", 00:29:21.319 "subtype": "NVMe" 00:29:21.319 } 00:29:21.319 ] 00:29:21.319 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.319 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:21.319 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:21.319 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:21.319 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:21.319 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:21.319 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:21.319 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:21.578 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:21.578 07:15:29 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:21.578 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:21.578 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.578 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.578 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.578 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.578 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:21.578 07:15:29 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:21.578 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:21.578 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:21.837 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:21.837 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:21.837 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:21.837 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:21.837 rmmod nvme_tcp 00:29:21.837 rmmod nvme_fabrics 00:29:21.837 rmmod nvme_keyring 00:29:21.837 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:21.837 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:21.837 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:21.837 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 116318 ']' 00:29:21.837 07:15:29 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 116318 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 116318 ']' 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 116318 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116318 00:29:21.837 killing process with pid 116318 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116318' 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 116318 00:29:21.837 07:15:29 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 116318 00:29:22.096 07:15:30 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:22.096 07:15:30 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:22.096 07:15:30 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:22.096 07:15:30 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.096 07:15:30 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:22.096 07:15:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.096 
07:15:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:22.096 07:15:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.096 07:15:30 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:22.096 00:29:22.096 real 0m3.271s 00:29:22.096 user 0m8.214s 00:29:22.096 sys 0m0.863s 00:29:22.096 07:15:30 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:22.096 ************************************ 00:29:22.096 END TEST nvmf_identify_passthru 00:29:22.096 07:15:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:22.096 ************************************ 00:29:22.096 07:15:30 -- common/autotest_common.sh@1142 -- # return 0 00:29:22.096 07:15:30 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:22.096 07:15:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:22.096 07:15:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.096 07:15:30 -- common/autotest_common.sh@10 -- # set +x 00:29:22.096 ************************************ 00:29:22.096 START TEST nvmf_dif 00:29:22.096 ************************************ 00:29:22.096 07:15:30 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:22.355 * Looking for test storage... 00:29:22.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:22.355 07:15:30 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:22.355 07:15:30 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.355 07:15:30 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.355 07:15:30 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.355 07:15:30 
nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.355 07:15:30 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.355 07:15:30 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.355 07:15:30 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:22.355 07:15:30 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:22.355 07:15:30 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:22.355 07:15:30 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:22.355 07:15:30 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:22.355 07:15:30 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:22.355 07:15:30 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.355 07:15:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:22.355 07:15:30 nvmf_dif -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:22.355 Cannot find device "nvmf_tgt_br" 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@155 -- # true 00:29:22.355 07:15:30 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:22.355 Cannot find device "nvmf_tgt_br2" 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@156 -- # true 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:22.356 Cannot find device "nvmf_tgt_br" 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@158 -- # true 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:22.356 Cannot find device "nvmf_tgt_br2" 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@159 -- # true 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:22.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:22.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:22.356 07:15:30 nvmf_dif -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:22.356 07:15:30 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:22.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:29:22.615 00:29:22.615 --- 10.0.0.2 ping statistics --- 00:29:22.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.615 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:22.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:22.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:29:22.615 00:29:22.615 --- 10.0.0.3 ping statistics --- 00:29:22.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.615 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:22.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:29:22.615 00:29:22.615 --- 10.0.0.1 ping statistics --- 00:29:22.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.615 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:22.615 07:15:30 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:22.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:22.873 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:22.873 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:22.873 07:15:30 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.873 07:15:30 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:22.873 07:15:30 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:22.873 07:15:30 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.874 07:15:30 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:22.874 07:15:30 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:22.874 07:15:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:22.874 07:15:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:22.874 07:15:30 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:22.874 07:15:30 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:22.874 07:15:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:22.874 07:15:30 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=116661 00:29:22.874 07:15:30 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:22.874 07:15:30 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 116661 00:29:22.874 07:15:30 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 116661 ']' 00:29:22.874 07:15:30 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.874 07:15:30 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:22.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.874 07:15:30 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.874 07:15:30 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:22.874 07:15:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:23.133 [2024-07-13 07:15:30.979613] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:29:23.133 [2024-07-13 07:15:30.979693] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.133 [2024-07-13 07:15:31.111438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.133 [2024-07-13 07:15:31.191382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
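Condensed, the veth topology that nvmf_veth_init rebuilds before each test and the in-namespace target launch traced above look roughly like the sketch below. The second veth pair carrying 10.0.0.3 and the cleanup of leftover links are omitted for brevity, and the nvmf_tgt path is the one used in this run.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge the two host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target reachability check
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF   # target app runs inside the namespace

With the target inside nvmf_tgt_ns_spdk and the host-side veth ends bridged, the fio/identify initiators on the host reach the 10.0.0.2:4420 TCP listener without touching any real NIC.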
00:29:23.133 [2024-07-13 07:15:31.191442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.133 [2024-07-13 07:15:31.191464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.133 [2024-07-13 07:15:31.191472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.133 [2024-07-13 07:15:31.191478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.133 [2024-07-13 07:15:31.191509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:29:23.392 07:15:31 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:23.392 07:15:31 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.392 07:15:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:23.392 07:15:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:23.392 [2024-07-13 07:15:31.386183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.392 07:15:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.392 07:15:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:23.392 ************************************ 00:29:23.392 START TEST fio_dif_1_default 00:29:23.392 ************************************ 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:23.392 bdev_null0 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.392 07:15:31 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:23.392 [2024-07-13 07:15:31.430307] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.392 { 00:29:23.392 "params": { 00:29:23.392 "name": "Nvme$subsystem", 00:29:23.392 "trtype": "$TEST_TRANSPORT", 00:29:23.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.392 "adrfam": "ipv4", 00:29:23.392 "trsvcid": "$NVMF_PORT", 00:29:23.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.392 "hdgst": ${hdgst:-false}, 00:29:23.392 "ddgst": ${ddgst:-false} 00:29:23.392 }, 00:29:23.392 "method": "bdev_nvme_attach_controller" 00:29:23.392 } 00:29:23.392 EOF 00:29:23.392 )") 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:23.392 07:15:31 
nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:23.392 07:15:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:23.392 "params": { 00:29:23.392 "name": "Nvme0", 00:29:23.392 "trtype": "tcp", 00:29:23.392 "traddr": "10.0.0.2", 00:29:23.392 "adrfam": "ipv4", 00:29:23.392 "trsvcid": "4420", 00:29:23.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.392 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:23.392 "hdgst": false, 00:29:23.393 "ddgst": false 00:29:23.393 }, 00:29:23.393 "method": "bdev_nvme_attach_controller" 00:29:23.393 }' 00:29:23.651 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:23.651 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:23.651 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.651 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:23.651 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:23.651 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:23.651 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:23.651 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:23.651 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:23.652 07:15:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.652 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:23.652 fio-3.35 00:29:23.652 Starting 1 thread 00:29:35.852 00:29:35.852 filename0: (groupid=0, jobs=1): err= 0: pid=116732: Sat Jul 13 07:15:42 2024 00:29:35.852 read: IOPS=2089, BW=8358KiB/s (8558kB/s)(81.8MiB/10024msec) 00:29:35.852 slat (nsec): min=5835, max=51948, avg=7883.97, stdev=3422.15 00:29:35.852 clat (usec): min=354, max=41618, avg=1890.41, stdev=7622.71 00:29:35.852 lat (usec): min=363, max=41627, avg=1898.30, stdev=7622.76 00:29:35.852 clat percentiles (usec): 00:29:35.852 | 1.00th=[ 367], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 383], 00:29:35.852 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 400], 60.00th=[ 
408], 00:29:35.852 | 70.00th=[ 412], 80.00th=[ 424], 90.00th=[ 441], 95.00th=[ 469], 00:29:35.852 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:29:35.852 | 99.99th=[41681] 00:29:35.852 bw ( KiB/s): min= 3040, max=13344, per=100.00%, avg=8376.00, stdev=2825.63, samples=20 00:29:35.852 iops : min= 760, max= 3336, avg=2094.00, stdev=706.41, samples=20 00:29:35.852 lat (usec) : 500=96.02%, 750=0.27% 00:29:35.852 lat (msec) : 2=0.02%, 4=0.02%, 50=3.67% 00:29:35.852 cpu : usr=90.23%, sys=8.56%, ctx=19, majf=0, minf=9 00:29:35.852 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.853 issued rwts: total=20944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.853 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:35.853 00:29:35.853 Run status group 0 (all jobs): 00:29:35.853 READ: bw=8358KiB/s (8558kB/s), 8358KiB/s-8358KiB/s (8558kB/s-8558kB/s), io=81.8MiB (85.8MB), run=10024-10024msec 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 00:29:35.853 real 0m11.046s 00:29:35.853 user 0m9.685s 00:29:35.853 sys 0m1.136s 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 ************************************ 00:29:35.853 END TEST fio_dif_1_default 00:29:35.853 ************************************ 00:29:35.853 07:15:42 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:35.853 07:15:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:35.853 07:15:42 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:35.853 07:15:42 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 ************************************ 00:29:35.853 START TEST fio_dif_1_multi_subsystems 00:29:35.853 ************************************ 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 bdev_null0 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 [2024-07-13 07:15:42.538397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 bdev_null1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 07:15:42 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.853 { 00:29:35.853 "params": { 00:29:35.853 "name": "Nvme$subsystem", 00:29:35.853 "trtype": "$TEST_TRANSPORT", 00:29:35.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.853 "adrfam": "ipv4", 00:29:35.853 "trsvcid": "$NVMF_PORT", 00:29:35.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.853 "hdgst": ${hdgst:-false}, 00:29:35.853 "ddgst": ${ddgst:-false} 00:29:35.853 }, 00:29:35.853 "method": "bdev_nvme_attach_controller" 00:29:35.853 } 00:29:35.853 EOF 00:29:35.853 )") 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:35.853 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.854 { 00:29:35.854 "params": { 00:29:35.854 "name": "Nvme$subsystem", 00:29:35.854 "trtype": "$TEST_TRANSPORT", 00:29:35.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.854 "adrfam": "ipv4", 00:29:35.854 "trsvcid": "$NVMF_PORT", 00:29:35.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.854 "hdgst": ${hdgst:-false}, 00:29:35.854 "ddgst": ${ddgst:-false} 00:29:35.854 }, 00:29:35.854 "method": "bdev_nvme_attach_controller" 00:29:35.854 } 00:29:35.854 EOF 00:29:35.854 )") 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
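The subsystem setup traced above can be reproduced by hand against a running nvmf_tgt; rpc_cmd in this trace issues the same RPCs that SPDK's scripts/rpc.py exposes, and the argument values below are copied from the trace. This is a minimal sketch and assumes the TCP transport has already been created, as the harness does earlier in the run.

# one null bdev with 16-byte metadata and DIF type 1, exported through its own subsystem
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# the trace repeats the same four calls for bdev_null1 / cnode1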
00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:35.854 "params": { 00:29:35.854 "name": "Nvme0", 00:29:35.854 "trtype": "tcp", 00:29:35.854 "traddr": "10.0.0.2", 00:29:35.854 "adrfam": "ipv4", 00:29:35.854 "trsvcid": "4420", 00:29:35.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.854 "hdgst": false, 00:29:35.854 "ddgst": false 00:29:35.854 }, 00:29:35.854 "method": "bdev_nvme_attach_controller" 00:29:35.854 },{ 00:29:35.854 "params": { 00:29:35.854 "name": "Nvme1", 00:29:35.854 "trtype": "tcp", 00:29:35.854 "traddr": "10.0.0.2", 00:29:35.854 "adrfam": "ipv4", 00:29:35.854 "trsvcid": "4420", 00:29:35.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.854 "hdgst": false, 00:29:35.854 "ddgst": false 00:29:35.854 }, 00:29:35.854 "method": "bdev_nvme_attach_controller" 00:29:35.854 }' 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:35.854 07:15:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.854 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:35.854 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:35.854 fio-3.35 00:29:35.854 Starting 2 threads 00:29:45.825 00:29:45.825 filename0: (groupid=0, jobs=1): err= 0: pid=116886: Sat Jul 13 07:15:53 2024 00:29:45.825 read: IOPS=224, BW=897KiB/s (918kB/s)(8976KiB/10008msec) 00:29:45.825 slat (nsec): min=6220, max=83770, avg=9685.73, stdev=6881.39 00:29:45.825 clat (usec): min=374, max=42429, avg=17807.54, stdev=20023.28 00:29:45.825 lat (usec): min=380, max=42440, avg=17817.22, stdev=20023.25 00:29:45.825 clat percentiles (usec): 00:29:45.825 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 412], 00:29:45.825 | 30.00th=[ 424], 40.00th=[ 437], 50.00th=[ 465], 60.00th=[40633], 00:29:45.825 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:45.825 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:29:45.825 | 99.99th=[42206] 00:29:45.825 bw ( KiB/s): min= 576, max= 2016, per=53.24%, avg=894.32, stdev=351.91, samples=19 00:29:45.825 iops : 
min= 144, max= 504, avg=223.58, stdev=87.98, samples=19 00:29:45.825 lat (usec) : 500=54.06%, 750=2.63%, 1000=0.18% 00:29:45.825 lat (msec) : 2=0.18%, 50=42.96% 00:29:45.825 cpu : usr=96.67%, sys=2.88%, ctx=7, majf=0, minf=0 00:29:45.825 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.825 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.825 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:45.825 filename1: (groupid=0, jobs=1): err= 0: pid=116887: Sat Jul 13 07:15:53 2024 00:29:45.825 read: IOPS=196, BW=785KiB/s (803kB/s)(7872KiB/10034msec) 00:29:45.825 slat (nsec): min=6262, max=74720, avg=10997.27, stdev=9156.15 00:29:45.825 clat (usec): min=375, max=42536, avg=20355.96, stdev=20241.66 00:29:45.825 lat (usec): min=382, max=42549, avg=20366.96, stdev=20242.28 00:29:45.825 clat percentiles (usec): 00:29:45.825 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 424], 00:29:45.825 | 30.00th=[ 441], 40.00th=[ 461], 50.00th=[ 685], 60.00th=[40633], 00:29:45.825 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:45.825 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:45.825 | 99.99th=[42730] 00:29:45.825 bw ( KiB/s): min= 384, max= 1888, per=46.75%, avg=785.60, stdev=298.29, samples=20 00:29:45.825 iops : min= 96, max= 472, avg=196.40, stdev=74.57, samples=20 00:29:45.825 lat (usec) : 500=46.75%, 750=3.86% 00:29:45.825 lat (msec) : 2=0.20%, 50=49.19% 00:29:45.825 cpu : usr=96.66%, sys=2.89%, ctx=32, majf=0, minf=0 00:29:45.825 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.825 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.825 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:45.825 00:29:45.825 Run status group 0 (all jobs): 00:29:45.825 READ: bw=1679KiB/s (1719kB/s), 785KiB/s-897KiB/s (803kB/s-918kB/s), io=16.5MiB (17.3MB), run=10008-10034msec 00:29:45.825 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:45.825 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:45.825 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:45.825 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:45.825 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.826 07:15:53 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.826 00:29:45.826 real 0m11.219s 00:29:45.826 user 0m20.194s 00:29:45.826 sys 0m0.895s 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.826 07:15:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 ************************************ 00:29:45.826 END TEST fio_dif_1_multi_subsystems 00:29:45.826 ************************************ 00:29:45.826 07:15:53 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:45.826 07:15:53 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:45.826 07:15:53 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:45.826 07:15:53 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.826 07:15:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 ************************************ 00:29:45.826 START TEST fio_dif_rand_params 00:29:45.826 ************************************ 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
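The fio_dif tests drive fio through the SPDK bdev external ioengine: the JSON emitted by gen_nvmf_target_json (printed above as two bdev_nvme_attach_controller entries) is fed in on /dev/fd/62 and the generated job file on /dev/fd/61. A hand-run equivalent of the traced invocation, with the two descriptors replaced by ordinary files whose names here are only illustrative, would look roughly like:

# bdev.json holds the attach_controller config printed in the trace; job.fio is the fio job file
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio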
00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 bdev_null0 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:45.826 [2024-07-13 07:15:53.811032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:45.826 { 00:29:45.826 "params": { 00:29:45.826 "name": "Nvme$subsystem", 00:29:45.826 "trtype": "$TEST_TRANSPORT", 00:29:45.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.826 "adrfam": "ipv4", 00:29:45.826 "trsvcid": "$NVMF_PORT", 00:29:45.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.826 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:29:45.826 "hdgst": ${hdgst:-false}, 00:29:45.826 "ddgst": ${ddgst:-false} 00:29:45.826 }, 00:29:45.826 "method": "bdev_nvme_attach_controller" 00:29:45.826 } 00:29:45.826 EOF 00:29:45.826 )") 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:45.826 "params": { 00:29:45.826 "name": "Nvme0", 00:29:45.826 "trtype": "tcp", 00:29:45.826 "traddr": "10.0.0.2", 00:29:45.826 "adrfam": "ipv4", 00:29:45.826 "trsvcid": "4420", 00:29:45.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.826 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:45.826 "hdgst": false, 00:29:45.826 "ddgst": false 00:29:45.826 }, 00:29:45.826 "method": "bdev_nvme_attach_controller" 00:29:45.826 }' 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:45.826 07:15:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.084 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:46.084 ... 
00:29:46.084 fio-3.35 00:29:46.084 Starting 3 threads 00:29:52.644 00:29:52.644 filename0: (groupid=0, jobs=1): err= 0: pid=117042: Sat Jul 13 07:15:59 2024 00:29:52.644 read: IOPS=338, BW=42.4MiB/s (44.4MB/s)(212MiB/5002msec) 00:29:52.644 slat (nsec): min=6312, max=46027, avg=11023.72, stdev=6462.57 00:29:52.644 clat (usec): min=3664, max=48697, avg=8825.00, stdev=4270.18 00:29:52.644 lat (usec): min=3671, max=48703, avg=8836.03, stdev=4271.00 00:29:52.644 clat percentiles (usec): 00:29:52.644 | 1.00th=[ 3720], 5.00th=[ 3818], 10.00th=[ 3884], 20.00th=[ 3982], 00:29:52.644 | 30.00th=[ 6915], 40.00th=[ 7832], 50.00th=[ 8291], 60.00th=[ 9372], 00:29:52.644 | 70.00th=[11994], 80.00th=[12780], 90.00th=[13435], 95.00th=[14091], 00:29:52.644 | 99.00th=[14746], 99.50th=[14877], 99.90th=[47449], 99.95th=[48497], 00:29:52.644 | 99.99th=[48497] 00:29:52.644 bw ( KiB/s): min=34560, max=54528, per=42.99%, avg=44458.67, stdev=7892.53, samples=9 00:29:52.644 iops : min= 270, max= 426, avg=347.33, stdev=61.66, samples=9 00:29:52.644 lat (msec) : 4=21.24%, 10=41.36%, 20=37.05%, 50=0.35% 00:29:52.644 cpu : usr=92.54%, sys=5.62%, ctx=10, majf=0, minf=0 00:29:52.644 IO depths : 1=30.4%, 2=69.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.644 issued rwts: total=1695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.644 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.644 filename0: (groupid=0, jobs=1): err= 0: pid=117043: Sat Jul 13 07:15:59 2024 00:29:52.644 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(130MiB/5004msec) 00:29:52.644 slat (nsec): min=6381, max=67346, avg=14389.98, stdev=7703.51 00:29:52.644 clat (usec): min=3640, max=53257, avg=14382.86, stdev=13999.00 00:29:52.644 lat (usec): min=3647, max=53266, avg=14397.25, stdev=13998.15 00:29:52.644 clat percentiles (usec): 00:29:52.644 | 1.00th=[ 3752], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 8160], 00:29:52.644 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:29:52.644 | 70.00th=[10028], 80.00th=[10421], 90.00th=[49021], 95.00th=[50070], 00:29:52.644 | 99.00th=[51643], 99.50th=[51643], 99.90th=[52167], 99.95th=[53216], 00:29:52.644 | 99.99th=[53216] 00:29:52.644 bw ( KiB/s): min=15360, max=39680, per=25.28%, avg=26140.44, stdev=8391.03, samples=9 00:29:52.644 iops : min= 120, max= 310, avg=204.22, stdev=65.55, samples=9 00:29:52.644 lat (msec) : 4=1.44%, 10=68.91%, 20=16.12%, 50=8.93%, 100=4.61% 00:29:52.644 cpu : usr=94.80%, sys=3.98%, ctx=9, majf=0, minf=0 00:29:52.644 IO depths : 1=3.7%, 2=96.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.644 issued rwts: total=1042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.644 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.644 filename0: (groupid=0, jobs=1): err= 0: pid=117044: Sat Jul 13 07:15:59 2024 00:29:52.644 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5004msec) 00:29:52.644 slat (nsec): min=5481, max=76940, avg=13945.04, stdev=7805.76 00:29:52.644 clat (usec): min=3761, max=52748, avg=11473.38, stdev=9520.74 00:29:52.644 lat (usec): min=3771, max=52761, avg=11487.32, stdev=9520.98 00:29:52.644 clat percentiles (usec): 00:29:52.644 | 1.00th=[ 5538], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 
7046], 00:29:52.644 | 30.00th=[ 7373], 40.00th=[ 8586], 50.00th=[ 9896], 60.00th=[10552], 00:29:52.644 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11994], 95.00th=[47449], 00:29:52.644 | 99.00th=[50594], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:29:52.644 | 99.99th=[52691] 00:29:52.644 bw ( KiB/s): min=25856, max=38144, per=31.68%, avg=32768.00, stdev=4175.23, samples=9 00:29:52.644 iops : min= 202, max= 298, avg=256.00, stdev=32.62, samples=9 00:29:52.644 lat (msec) : 4=0.08%, 10=50.23%, 20=43.95%, 50=4.13%, 100=1.61% 00:29:52.644 cpu : usr=94.52%, sys=3.88%, ctx=1118, majf=0, minf=0 00:29:52.644 IO depths : 1=4.1%, 2=95.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.644 issued rwts: total=1306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.644 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.644 00:29:52.644 Run status group 0 (all jobs): 00:29:52.644 READ: bw=101MiB/s (106MB/s), 26.0MiB/s-42.4MiB/s (27.3MB/s-44.4MB/s), io=505MiB (530MB), run=5002-5004msec 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:52.644 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 bdev_null0 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 [2024-07-13 07:15:59.893904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 bdev_null1 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 bdev_null2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.645 { 00:29:52.645 "params": { 00:29:52.645 "name": 
"Nvme$subsystem", 00:29:52.645 "trtype": "$TEST_TRANSPORT", 00:29:52.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.645 "adrfam": "ipv4", 00:29:52.645 "trsvcid": "$NVMF_PORT", 00:29:52.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.645 "hdgst": ${hdgst:-false}, 00:29:52.645 "ddgst": ${ddgst:-false} 00:29:52.645 }, 00:29:52.645 "method": "bdev_nvme_attach_controller" 00:29:52.645 } 00:29:52.645 EOF 00:29:52.645 )") 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.645 { 00:29:52.645 "params": { 00:29:52.645 "name": "Nvme$subsystem", 00:29:52.645 "trtype": "$TEST_TRANSPORT", 00:29:52.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.645 "adrfam": "ipv4", 00:29:52.645 "trsvcid": "$NVMF_PORT", 00:29:52.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.645 "hdgst": ${hdgst:-false}, 00:29:52.645 "ddgst": ${ddgst:-false} 00:29:52.645 }, 00:29:52.645 "method": "bdev_nvme_attach_controller" 00:29:52.645 } 00:29:52.645 EOF 00:29:52.645 )") 00:29:52.645 07:15:59 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.645 { 00:29:52.645 "params": { 00:29:52.645 "name": "Nvme$subsystem", 00:29:52.645 "trtype": "$TEST_TRANSPORT", 00:29:52.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.645 "adrfam": "ipv4", 00:29:52.645 "trsvcid": "$NVMF_PORT", 00:29:52.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.645 "hdgst": ${hdgst:-false}, 00:29:52.645 "ddgst": ${ddgst:-false} 00:29:52.645 }, 00:29:52.645 "method": "bdev_nvme_attach_controller" 00:29:52.645 } 00:29:52.645 EOF 00:29:52.645 )") 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:52.645 07:15:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:52.645 "params": { 00:29:52.645 "name": "Nvme0", 00:29:52.646 "trtype": "tcp", 00:29:52.646 "traddr": "10.0.0.2", 00:29:52.646 "adrfam": "ipv4", 00:29:52.646 "trsvcid": "4420", 00:29:52.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:52.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:52.646 "hdgst": false, 00:29:52.646 "ddgst": false 00:29:52.646 }, 00:29:52.646 "method": "bdev_nvme_attach_controller" 00:29:52.646 },{ 00:29:52.646 "params": { 00:29:52.646 "name": "Nvme1", 00:29:52.646 "trtype": "tcp", 00:29:52.646 "traddr": "10.0.0.2", 00:29:52.646 "adrfam": "ipv4", 00:29:52.646 "trsvcid": "4420", 00:29:52.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:52.646 "hdgst": false, 00:29:52.646 "ddgst": false 00:29:52.646 }, 00:29:52.646 "method": "bdev_nvme_attach_controller" 00:29:52.646 },{ 00:29:52.646 "params": { 00:29:52.646 "name": "Nvme2", 00:29:52.646 "trtype": "tcp", 00:29:52.646 "traddr": "10.0.0.2", 00:29:52.646 "adrfam": "ipv4", 00:29:52.646 "trsvcid": "4420", 00:29:52.646 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:52.646 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:52.646 "hdgst": false, 00:29:52.646 "ddgst": false 00:29:52.646 }, 00:29:52.646 "method": "bdev_nvme_attach_controller" 00:29:52.646 }' 00:29:52.646 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:52.646 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:52.646 07:15:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.646 07:16:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:52.646 07:16:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:52.646 07:16:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:52.646 07:16:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:52.646 07:16:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:52.646 
07:16:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:52.646 07:16:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.646 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:52.646 ... 00:29:52.646 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:52.646 ... 00:29:52.646 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:52.646 ... 00:29:52.646 fio-3.35 00:29:52.646 Starting 24 threads 00:30:10.736 00:30:10.736 filename0: (groupid=0, jobs=1): err= 0: pid=117139: Sat Jul 13 07:16:17 2024 00:30:10.736 read: IOPS=370, BW=1482KiB/s (1517kB/s)(14.5MiB/10028msec) 00:30:10.736 slat (usec): min=4, max=8020, avg=19.97, stdev=193.09 00:30:10.736 clat (msec): min=15, max=101, avg=43.02, stdev=14.50 00:30:10.736 lat (msec): min=15, max=101, avg=43.04, stdev=14.50 00:30:10.736 clat percentiles (msec): 00:30:10.736 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 32], 00:30:10.736 | 30.00th=[ 35], 40.00th=[ 39], 50.00th=[ 41], 60.00th=[ 45], 00:30:10.736 | 70.00th=[ 48], 80.00th=[ 56], 90.00th=[ 64], 95.00th=[ 70], 00:30:10.736 | 99.00th=[ 87], 99.50th=[ 94], 99.90th=[ 103], 99.95th=[ 103], 00:30:10.736 | 99.99th=[ 103] 00:30:10.736 bw ( KiB/s): min= 1024, max= 2016, per=2.91%, avg=1479.60, stdev=281.21, samples=20 00:30:10.736 iops : min= 256, max= 504, avg=369.90, stdev=70.30, samples=20 00:30:10.736 lat (msec) : 20=1.29%, 50=74.70%, 100=23.74%, 250=0.27% 00:30:10.736 cpu : usr=45.08%, sys=0.88%, ctx=1263, majf=0, minf=9 00:30:10.736 IO depths : 1=1.8%, 2=3.9%, 4=11.2%, 8=71.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:30:10.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.736 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.736 issued rwts: total=3715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.736 filename0: (groupid=0, jobs=1): err= 0: pid=117140: Sat Jul 13 07:16:17 2024 00:30:10.736 read: IOPS=838, BW=3353KiB/s (3433kB/s)(32.8MiB/10006msec) 00:30:10.736 slat (usec): min=3, max=8025, avg=12.23, stdev=87.82 00:30:10.736 clat (msec): min=7, max=134, avg=18.99, stdev=20.17 00:30:10.736 lat (msec): min=7, max=135, avg=19.01, stdev=20.17 00:30:10.736 clat percentiles (msec): 00:30:10.736 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:30:10.736 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:30:10.736 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 41], 95.00th=[ 70], 00:30:10.736 | 99.00th=[ 106], 99.50th=[ 115], 99.90th=[ 132], 99.95th=[ 136], 00:30:10.736 | 99.99th=[ 136] 00:30:10.736 bw ( KiB/s): min= 720, max= 6752, per=6.35%, avg=3221.58, stdev=2141.82, samples=19 00:30:10.736 iops : min= 180, max= 1688, avg=805.32, stdev=535.46, samples=19 00:30:10.736 lat (msec) : 10=33.06%, 20=49.16%, 50=9.34%, 100=6.93%, 250=1.51% 00:30:10.736 cpu : usr=67.35%, sys=1.66%, ctx=689, majf=0, minf=9 00:30:10.736 IO depths : 1=2.2%, 2=4.5%, 4=12.1%, 8=70.8%, 16=10.4%, 32=0.0%, >=64=0.0% 00:30:10.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.736 complete : 0=0.0%, 4=90.9%, 8=3.6%, 16=5.6%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:10.736 issued rwts: total=8387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.736 filename0: (groupid=0, jobs=1): err= 0: pid=117141: Sat Jul 13 07:16:17 2024 00:30:10.736 read: IOPS=391, BW=1564KiB/s (1602kB/s)(15.4MiB/10053msec) 00:30:10.736 slat (usec): min=3, max=8049, avg=26.34, stdev=325.70 00:30:10.736 clat (usec): min=1370, max=113213, avg=40653.54, stdev=18979.87 00:30:10.736 lat (usec): min=1377, max=113238, avg=40679.88, stdev=18990.75 00:30:10.736 clat percentiles (usec): 00:30:10.736 | 1.00th=[ 1450], 5.00th=[ 13829], 10.00th=[ 22152], 20.00th=[ 24511], 00:30:10.736 | 30.00th=[ 31065], 40.00th=[ 34866], 50.00th=[ 36963], 60.00th=[ 42206], 00:30:10.736 | 70.00th=[ 47973], 80.00th=[ 55837], 90.00th=[ 67634], 95.00th=[ 74974], 00:30:10.736 | 99.00th=[ 93848], 99.50th=[102237], 99.90th=[107480], 99.95th=[107480], 00:30:10.736 | 99.99th=[112722] 00:30:10.736 bw ( KiB/s): min= 896, max= 3416, per=3.09%, avg=1569.80, stdev=586.02, samples=20 00:30:10.736 iops : min= 224, max= 854, avg=392.45, stdev=146.50, samples=20 00:30:10.736 lat (msec) : 2=2.04%, 4=1.22%, 10=1.63%, 20=2.16%, 50=70.59% 00:30:10.736 lat (msec) : 100=21.80%, 250=0.56% 00:30:10.736 cpu : usr=37.30%, sys=0.50%, ctx=983, majf=0, minf=0 00:30:10.736 IO depths : 1=1.4%, 2=3.1%, 4=10.4%, 8=73.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:10.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.736 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.736 issued rwts: total=3931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.736 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.736 filename0: (groupid=0, jobs=1): err= 0: pid=117142: Sat Jul 13 07:16:17 2024 00:30:10.736 read: IOPS=671, BW=2687KiB/s (2751kB/s)(26.3MiB/10007msec) 00:30:10.736 slat (usec): min=3, max=8028, avg=15.06, stdev=162.33 00:30:10.736 clat (msec): min=7, max=168, avg=23.72, stdev=21.70 00:30:10.736 lat (msec): min=7, max=168, avg=23.73, stdev=21.70 00:30:10.736 clat percentiles (msec): 00:30:10.736 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 15], 00:30:10.736 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 17], 00:30:10.736 | 70.00th=[ 17], 80.00th=[ 24], 90.00th=[ 61], 95.00th=[ 77], 00:30:10.736 | 99.00th=[ 108], 99.50th=[ 118], 99.90th=[ 169], 99.95th=[ 169], 00:30:10.736 | 99.99th=[ 169] 00:30:10.736 bw ( KiB/s): min= 640, max= 4536, per=5.00%, avg=2536.26, stdev=1457.43, samples=19 00:30:10.736 iops : min= 160, max= 1134, avg=634.05, stdev=364.35, samples=19 00:30:10.736 lat (msec) : 10=11.04%, 20=64.90%, 50=12.41%, 100=10.43%, 250=1.22% 00:30:10.736 cpu : usr=44.60%, sys=1.03%, ctx=1097, majf=0, minf=9 00:30:10.737 IO depths : 1=1.0%, 2=2.1%, 4=7.2%, 8=78.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:10.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 complete : 0=0.0%, 4=89.7%, 8=4.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 issued rwts: total=6721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.737 filename0: (groupid=0, jobs=1): err= 0: pid=117143: Sat Jul 13 07:16:17 2024 00:30:10.737 read: IOPS=340, BW=1362KiB/s (1395kB/s)(13.4MiB/10044msec) 00:30:10.737 slat (usec): min=6, max=8028, avg=18.06, stdev=175.78 00:30:10.737 clat (msec): min=12, max=128, avg=46.82, stdev=17.08 00:30:10.737 lat (msec): min=12, max=128, avg=46.84, stdev=17.08 00:30:10.737 clat 
percentiles (msec): 00:30:10.737 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 35], 00:30:10.737 | 30.00th=[ 36], 40.00th=[ 40], 50.00th=[ 46], 60.00th=[ 48], 00:30:10.737 | 70.00th=[ 54], 80.00th=[ 61], 90.00th=[ 69], 95.00th=[ 75], 00:30:10.737 | 99.00th=[ 106], 99.50th=[ 120], 99.90th=[ 129], 99.95th=[ 129], 00:30:10.737 | 99.99th=[ 129] 00:30:10.737 bw ( KiB/s): min= 864, max= 1840, per=2.69%, avg=1364.00, stdev=289.15, samples=20 00:30:10.737 iops : min= 216, max= 460, avg=341.00, stdev=72.29, samples=20 00:30:10.737 lat (msec) : 20=0.70%, 50=67.54%, 100=30.64%, 250=1.11% 00:30:10.737 cpu : usr=39.08%, sys=0.58%, ctx=1131, majf=0, minf=9 00:30:10.737 IO depths : 1=1.4%, 2=3.0%, 4=10.5%, 8=73.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:30:10.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 issued rwts: total=3420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.737 filename0: (groupid=0, jobs=1): err= 0: pid=117144: Sat Jul 13 07:16:17 2024 00:30:10.737 read: IOPS=893, BW=3573KiB/s (3659kB/s)(34.9MiB/10007msec) 00:30:10.737 slat (usec): min=3, max=4051, avg=10.33, stdev=43.07 00:30:10.737 clat (msec): min=6, max=151, avg=17.84, stdev=19.92 00:30:10.737 lat (msec): min=6, max=151, avg=17.85, stdev=19.92 00:30:10.737 clat percentiles (msec): 00:30:10.737 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:30:10.737 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:30:10.737 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 32], 95.00th=[ 68], 00:30:10.737 | 99.00th=[ 102], 99.50th=[ 120], 99.90th=[ 138], 99.95th=[ 153], 00:30:10.737 | 99.99th=[ 153] 00:30:10.737 bw ( KiB/s): min= 752, max= 7224, per=6.91%, avg=3506.79, stdev=2332.79, samples=19 00:30:10.737 iops : min= 188, max= 1806, avg=876.63, stdev=583.22, samples=19 00:30:10.737 lat (msec) : 10=49.50%, 20=34.51%, 50=7.46%, 100=7.53%, 250=1.01% 00:30:10.737 cpu : usr=68.34%, sys=1.35%, ctx=976, majf=0, minf=9 00:30:10.737 IO depths : 1=0.9%, 2=1.9%, 4=4.3%, 8=81.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:10.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 complete : 0=0.0%, 4=89.7%, 8=4.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 issued rwts: total=8940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.737 filename0: (groupid=0, jobs=1): err= 0: pid=117145: Sat Jul 13 07:16:17 2024 00:30:10.737 read: IOPS=351, BW=1404KiB/s (1438kB/s)(13.8MiB/10044msec) 00:30:10.737 slat (usec): min=4, max=8029, avg=16.42, stdev=190.95 00:30:10.737 clat (msec): min=7, max=130, avg=45.47, stdev=19.96 00:30:10.737 lat (msec): min=7, max=130, avg=45.49, stdev=19.96 00:30:10.737 clat percentiles (msec): 00:30:10.737 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 29], 00:30:10.737 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 39], 60.00th=[ 48], 00:30:10.737 | 70.00th=[ 50], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 87], 00:30:10.737 | 99.00th=[ 110], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 131], 00:30:10.737 | 99.99th=[ 131] 00:30:10.737 bw ( KiB/s): min= 850, max= 2028, per=2.77%, avg=1403.90, stdev=398.29, samples=20 00:30:10.737 iops : min= 212, max= 507, avg=350.95, stdev=99.61, samples=20 00:30:10.737 lat (msec) : 10=0.65%, 20=0.62%, 50=69.80%, 100=27.57%, 250=1.36% 00:30:10.737 cpu : usr=32.61%, sys=0.58%, ctx=876, majf=0, minf=9 
00:30:10.737 IO depths : 1=1.0%, 2=2.3%, 4=9.9%, 8=74.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:10.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 issued rwts: total=3526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.737 filename0: (groupid=0, jobs=1): err= 0: pid=117146: Sat Jul 13 07:16:17 2024 00:30:10.737 read: IOPS=793, BW=3172KiB/s (3248kB/s)(31.0MiB/10004msec) 00:30:10.737 slat (usec): min=5, max=4028, avg=12.16, stdev=78.31 00:30:10.737 clat (msec): min=6, max=131, avg=20.10, stdev=19.73 00:30:10.737 lat (msec): min=6, max=131, avg=20.11, stdev=19.73 00:30:10.737 clat percentiles (msec): 00:30:10.737 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:30:10.737 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 16], 00:30:10.737 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 55], 95.00th=[ 70], 00:30:10.737 | 99.00th=[ 94], 99.50th=[ 104], 99.90th=[ 132], 99.95th=[ 132], 00:30:10.737 | 99.99th=[ 132] 00:30:10.737 bw ( KiB/s): min= 768, max= 6528, per=6.18%, avg=3134.26, stdev=2064.33, samples=19 00:30:10.737 iops : min= 192, max= 1632, avg=783.53, stdev=516.11, samples=19 00:30:10.737 lat (msec) : 10=24.29%, 20=56.97%, 50=8.66%, 100=9.50%, 250=0.58% 00:30:10.737 cpu : usr=64.92%, sys=1.36%, ctx=981, majf=0, minf=9 00:30:10.737 IO depths : 1=1.6%, 2=3.3%, 4=10.3%, 8=73.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:10.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 complete : 0=0.0%, 4=90.3%, 8=4.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 issued rwts: total=7934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.737 filename1: (groupid=0, jobs=1): err= 0: pid=117147: Sat Jul 13 07:16:17 2024 00:30:10.737 read: IOPS=352, BW=1409KiB/s (1443kB/s)(13.8MiB/10033msec) 00:30:10.737 slat (usec): min=4, max=9077, avg=21.96, stdev=262.06 00:30:10.737 clat (msec): min=15, max=117, avg=45.25, stdev=16.76 00:30:10.737 lat (msec): min=15, max=117, avg=45.27, stdev=16.76 00:30:10.737 clat percentiles (msec): 00:30:10.737 | 1.00th=[ 21], 5.00th=[ 23], 10.00th=[ 26], 20.00th=[ 33], 00:30:10.737 | 30.00th=[ 35], 40.00th=[ 38], 50.00th=[ 43], 60.00th=[ 47], 00:30:10.737 | 70.00th=[ 52], 80.00th=[ 59], 90.00th=[ 67], 95.00th=[ 78], 00:30:10.737 | 99.00th=[ 99], 99.50th=[ 106], 99.90th=[ 117], 99.95th=[ 117], 00:30:10.737 | 99.99th=[ 117] 00:30:10.737 bw ( KiB/s): min= 944, max= 1952, per=2.77%, avg=1406.70, stdev=341.62, samples=20 00:30:10.737 iops : min= 236, max= 488, avg=351.65, stdev=85.40, samples=20 00:30:10.737 lat (msec) : 20=0.65%, 50=67.71%, 100=30.93%, 250=0.71% 00:30:10.737 cpu : usr=40.05%, sys=0.76%, ctx=1366, majf=0, minf=9 00:30:10.737 IO depths : 1=1.1%, 2=2.5%, 4=8.7%, 8=74.8%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:10.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 issued rwts: total=3534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.737 filename1: (groupid=0, jobs=1): err= 0: pid=117148: Sat Jul 13 07:16:17 2024 00:30:10.737 read: IOPS=330, BW=1320KiB/s (1352kB/s)(12.9MiB/10027msec) 00:30:10.737 slat (usec): min=6, max=8003, avg=16.94, stdev=170.73 00:30:10.737 clat (msec): min=17, 
max=129, avg=48.33, stdev=17.73 00:30:10.737 lat (msec): min=17, max=129, avg=48.35, stdev=17.72 00:30:10.737 clat percentiles (msec): 00:30:10.737 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 34], 00:30:10.737 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 46], 60.00th=[ 48], 00:30:10.737 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 87], 00:30:10.737 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 130], 00:30:10.737 | 99.99th=[ 130] 00:30:10.737 bw ( KiB/s): min= 768, max= 1744, per=2.60%, avg=1318.80, stdev=327.10, samples=20 00:30:10.737 iops : min= 192, max= 436, avg=329.65, stdev=81.77, samples=20 00:30:10.737 lat (msec) : 20=0.57%, 50=64.19%, 100=33.82%, 250=1.42% 00:30:10.737 cpu : usr=42.31%, sys=0.68%, ctx=1255, majf=0, minf=9 00:30:10.737 IO depths : 1=2.2%, 2=4.6%, 4=13.0%, 8=69.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:10.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 complete : 0=0.0%, 4=91.0%, 8=4.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 issued rwts: total=3309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.737 filename1: (groupid=0, jobs=1): err= 0: pid=117149: Sat Jul 13 07:16:17 2024 00:30:10.737 read: IOPS=353, BW=1415KiB/s (1449kB/s)(13.9MiB/10060msec) 00:30:10.737 slat (usec): min=3, max=8029, avg=21.77, stdev=232.63 00:30:10.737 clat (msec): min=8, max=143, avg=45.09, stdev=20.94 00:30:10.737 lat (msec): min=8, max=143, avg=45.11, stdev=20.95 00:30:10.737 clat percentiles (msec): 00:30:10.737 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 30], 00:30:10.737 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 40], 60.00th=[ 46], 00:30:10.737 | 70.00th=[ 51], 80.00th=[ 60], 90.00th=[ 72], 95.00th=[ 85], 00:30:10.737 | 99.00th=[ 121], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:30:10.737 | 99.99th=[ 144] 00:30:10.737 bw ( KiB/s): min= 816, max= 2272, per=2.79%, avg=1417.20, stdev=454.08, samples=20 00:30:10.737 iops : min= 204, max= 568, avg=354.30, stdev=113.52, samples=20 00:30:10.737 lat (msec) : 10=0.90%, 20=3.37%, 50=66.79%, 100=26.64%, 250=2.30% 00:30:10.737 cpu : usr=38.71%, sys=0.64%, ctx=1103, majf=0, minf=9 00:30:10.737 IO depths : 1=1.2%, 2=2.6%, 4=10.1%, 8=74.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:30:10.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 issued rwts: total=3559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.737 filename1: (groupid=0, jobs=1): err= 0: pid=117150: Sat Jul 13 07:16:17 2024 00:30:10.737 read: IOPS=743, BW=2973KiB/s (3044kB/s)(29.0MiB/10002msec) 00:30:10.737 slat (usec): min=3, max=5058, avg=13.52, stdev=59.12 00:30:10.737 clat (msec): min=6, max=144, avg=21.42, stdev=21.17 00:30:10.737 lat (msec): min=6, max=144, avg=21.43, stdev=21.17 00:30:10.737 clat percentiles (msec): 00:30:10.737 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:30:10.737 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:30:10.737 | 70.00th=[ 16], 80.00th=[ 20], 90.00th=[ 54], 95.00th=[ 74], 00:30:10.737 | 99.00th=[ 108], 99.50th=[ 117], 99.90th=[ 142], 99.95th=[ 146], 00:30:10.737 | 99.99th=[ 146] 00:30:10.737 bw ( KiB/s): min= 768, max= 5048, per=5.56%, avg=2822.37, stdev=1694.50, samples=19 00:30:10.737 iops : min= 192, max= 1262, avg=705.58, stdev=423.61, samples=19 00:30:10.737 lat (msec) 
: 10=14.29%, 20=65.89%, 50=9.58%, 100=8.85%, 250=1.40% 00:30:10.737 cpu : usr=66.74%, sys=1.56%, ctx=763, majf=0, minf=9 00:30:10.737 IO depths : 1=3.6%, 2=7.4%, 4=16.6%, 8=63.5%, 16=8.9%, 32=0.0%, >=64=0.0% 00:30:10.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 complete : 0=0.0%, 4=92.0%, 8=2.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.737 issued rwts: total=7434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.737 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.737 filename1: (groupid=0, jobs=1): err= 0: pid=117151: Sat Jul 13 07:16:17 2024 00:30:10.737 read: IOPS=930, BW=3724KiB/s (3813kB/s)(36.4MiB/10003msec) 00:30:10.737 slat (usec): min=3, max=266, avg=10.19, stdev= 6.39 00:30:10.737 clat (msec): min=3, max=152, avg=17.12, stdev=19.32 00:30:10.737 lat (msec): min=3, max=152, avg=17.13, stdev=19.32 00:30:10.737 clat percentiles (msec): 00:30:10.737 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:30:10.737 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:30:10.737 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 35], 95.00th=[ 68], 00:30:10.737 | 99.00th=[ 97], 99.50th=[ 109], 99.90th=[ 153], 99.95th=[ 153], 00:30:10.738 | 99.99th=[ 153] 00:30:10.738 bw ( KiB/s): min= 641, max= 7120, per=7.09%, avg=3595.05, stdev=2421.53, samples=19 00:30:10.738 iops : min= 160, max= 1780, avg=898.68, stdev=605.41, samples=19 00:30:10.738 lat (msec) : 4=0.39%, 10=47.47%, 20=37.78%, 50=6.99%, 100=6.39% 00:30:10.738 lat (msec) : 250=0.99% 00:30:10.738 cpu : usr=67.12%, sys=1.64%, ctx=611, majf=0, minf=9 00:30:10.738 IO depths : 1=0.7%, 2=1.4%, 4=8.2%, 8=77.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:30:10.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 complete : 0=0.0%, 4=89.5%, 8=5.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 issued rwts: total=9312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.738 filename1: (groupid=0, jobs=1): err= 0: pid=117152: Sat Jul 13 07:16:17 2024 00:30:10.738 read: IOPS=712, BW=2852KiB/s (2920kB/s)(27.9MiB/10007msec) 00:30:10.738 slat (usec): min=3, max=4034, avg=12.84, stdev=67.31 00:30:10.738 clat (msec): min=6, max=141, avg=22.35, stdev=21.48 00:30:10.738 lat (msec): min=6, max=141, avg=22.36, stdev=21.48 00:30:10.738 clat percentiles (msec): 00:30:10.738 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:10.738 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:30:10.738 | 70.00th=[ 17], 80.00th=[ 26], 90.00th=[ 59], 95.00th=[ 75], 00:30:10.738 | 99.00th=[ 101], 99.50th=[ 113], 99.90th=[ 129], 99.95th=[ 129], 00:30:10.738 | 99.99th=[ 142] 00:30:10.738 bw ( KiB/s): min= 768, max= 5632, per=5.36%, avg=2721.95, stdev=1839.00, samples=19 00:30:10.738 iops : min= 192, max= 1408, avg=680.47, stdev=459.76, samples=19 00:30:10.738 lat (msec) : 10=14.07%, 20=60.63%, 50=13.46%, 100=10.89%, 250=0.95% 00:30:10.738 cpu : usr=67.41%, sys=1.48%, ctx=858, majf=0, minf=9 00:30:10.738 IO depths : 1=3.0%, 2=6.0%, 4=14.5%, 8=66.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:30:10.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 complete : 0=0.0%, 4=91.4%, 8=3.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 issued rwts: total=7134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.738 filename1: (groupid=0, jobs=1): err= 0: pid=117153: Sat Jul 13 07:16:17 2024 
00:30:10.738 read: IOPS=720, BW=2882KiB/s (2951kB/s)(28.1MiB/10001msec) 00:30:10.738 slat (usec): min=3, max=7997, avg=15.00, stdev=125.47 00:30:10.738 clat (usec): min=1381, max=163936, avg=22096.12, stdev=21310.66 00:30:10.738 lat (usec): min=1389, max=163945, avg=22111.11, stdev=21313.89 00:30:10.738 clat percentiles (msec): 00:30:10.738 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 13], 00:30:10.738 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 16], 00:30:10.738 | 70.00th=[ 17], 80.00th=[ 22], 90.00th=[ 58], 95.00th=[ 74], 00:30:10.738 | 99.00th=[ 110], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 159], 00:30:10.738 | 99.99th=[ 165] 00:30:10.738 bw ( KiB/s): min= 640, max= 4745, per=5.49%, avg=2787.37, stdev=1662.25, samples=19 00:30:10.738 iops : min= 160, max= 1186, avg=696.79, stdev=415.57, samples=19 00:30:10.738 lat (msec) : 2=0.89%, 4=0.22%, 10=9.22%, 20=68.40%, 50=10.92% 00:30:10.738 lat (msec) : 100=9.13%, 250=1.22% 00:30:10.738 cpu : usr=59.80%, sys=1.39%, ctx=882, majf=0, minf=9 00:30:10.738 IO depths : 1=3.1%, 2=6.5%, 4=16.1%, 8=64.8%, 16=9.5%, 32=0.0%, >=64=0.0% 00:30:10.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 complete : 0=0.0%, 4=91.6%, 8=2.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 issued rwts: total=7205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.738 filename1: (groupid=0, jobs=1): err= 0: pid=117154: Sat Jul 13 07:16:17 2024 00:30:10.738 read: IOPS=867, BW=3469KiB/s (3552kB/s)(33.9MiB/10003msec) 00:30:10.738 slat (nsec): min=3877, max=57054, avg=11129.84, stdev=7398.68 00:30:10.738 clat (msec): min=3, max=142, avg=18.37, stdev=19.34 00:30:10.738 lat (msec): min=3, max=142, avg=18.38, stdev=19.34 00:30:10.738 clat percentiles (msec): 00:30:10.738 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:30:10.738 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:30:10.738 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 39], 95.00th=[ 66], 00:30:10.738 | 99.00th=[ 103], 99.50th=[ 113], 99.90th=[ 136], 99.95th=[ 144], 00:30:10.738 | 99.99th=[ 144] 00:30:10.738 bw ( KiB/s): min= 768, max= 6656, per=6.78%, avg=3442.89, stdev=2251.10, samples=19 00:30:10.738 iops : min= 192, max= 1664, avg=860.68, stdev=562.79, samples=19 00:30:10.738 lat (msec) : 4=0.18%, 10=33.00%, 20=50.28%, 50=8.23%, 100=7.22% 00:30:10.738 lat (msec) : 250=1.08% 00:30:10.738 cpu : usr=71.47%, sys=1.53%, ctx=629, majf=0, minf=9 00:30:10.738 IO depths : 1=1.8%, 2=3.6%, 4=7.9%, 8=75.9%, 16=10.8%, 32=0.0%, >=64=0.0% 00:30:10.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 complete : 0=0.0%, 4=90.4%, 8=4.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 issued rwts: total=8675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.738 filename2: (groupid=0, jobs=1): err= 0: pid=117155: Sat Jul 13 07:16:17 2024 00:30:10.738 read: IOPS=332, BW=1330KiB/s (1362kB/s)(13.0MiB/10043msec) 00:30:10.738 slat (usec): min=5, max=8021, avg=19.35, stdev=184.05 00:30:10.738 clat (msec): min=14, max=121, avg=47.92, stdev=18.22 00:30:10.738 lat (msec): min=14, max=121, avg=47.93, stdev=18.23 00:30:10.738 clat percentiles (msec): 00:30:10.738 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 29], 20.00th=[ 34], 00:30:10.738 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 45], 60.00th=[ 48], 00:30:10.738 | 70.00th=[ 54], 80.00th=[ 63], 90.00th=[ 70], 95.00th=[ 89], 00:30:10.738 | 
99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 122], 99.95th=[ 122], 00:30:10.738 | 99.99th=[ 122] 00:30:10.738 bw ( KiB/s): min= 768, max= 1840, per=2.62%, avg=1328.30, stdev=337.10, samples=20 00:30:10.738 iops : min= 192, max= 460, avg=332.05, stdev=84.26, samples=20 00:30:10.738 lat (msec) : 20=1.41%, 50=62.56%, 100=34.77%, 250=1.26% 00:30:10.738 cpu : usr=40.73%, sys=0.66%, ctx=1363, majf=0, minf=9 00:30:10.738 IO depths : 1=1.2%, 2=2.7%, 4=10.0%, 8=73.3%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:10.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 issued rwts: total=3339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.738 filename2: (groupid=0, jobs=1): err= 0: pid=117156: Sat Jul 13 07:16:17 2024 00:30:10.738 read: IOPS=646, BW=2587KiB/s (2649kB/s)(25.3MiB/10007msec) 00:30:10.738 slat (usec): min=3, max=8036, avg=18.86, stdev=217.28 00:30:10.738 clat (msec): min=6, max=137, avg=24.62, stdev=21.12 00:30:10.738 lat (msec): min=6, max=137, avg=24.64, stdev=21.12 00:30:10.738 clat percentiles (msec): 00:30:10.738 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 16], 00:30:10.738 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 17], 00:30:10.738 | 70.00th=[ 18], 80.00th=[ 25], 90.00th=[ 61], 95.00th=[ 75], 00:30:10.738 | 99.00th=[ 108], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 138], 00:30:10.738 | 99.99th=[ 138] 00:30:10.738 bw ( KiB/s): min= 768, max= 4144, per=4.99%, avg=2532.37, stdev=1407.38, samples=19 00:30:10.738 iops : min= 192, max= 1036, avg=633.05, stdev=351.86, samples=19 00:30:10.738 lat (msec) : 10=1.48%, 20=72.48%, 50=14.43%, 100=10.20%, 250=1.41% 00:30:10.738 cpu : usr=44.36%, sys=0.89%, ctx=1128, majf=0, minf=9 00:30:10.738 IO depths : 1=1.3%, 2=2.7%, 4=9.2%, 8=75.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:10.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 complete : 0=0.0%, 4=90.0%, 8=4.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 issued rwts: total=6472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.738 filename2: (groupid=0, jobs=1): err= 0: pid=117157: Sat Jul 13 07:16:17 2024 00:30:10.738 read: IOPS=344, BW=1376KiB/s (1409kB/s)(13.5MiB/10061msec) 00:30:10.738 slat (usec): min=4, max=8038, avg=14.23, stdev=136.62 00:30:10.738 clat (msec): min=9, max=141, avg=46.39, stdev=18.69 00:30:10.738 lat (msec): min=9, max=141, avg=46.41, stdev=18.68 00:30:10.738 clat percentiles (msec): 00:30:10.738 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 34], 00:30:10.738 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 46], 60.00th=[ 48], 00:30:10.738 | 70.00th=[ 57], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 83], 00:30:10.738 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 132], 99.95th=[ 142], 00:30:10.738 | 99.99th=[ 142] 00:30:10.738 bw ( KiB/s): min= 864, max= 2091, per=2.71%, avg=1377.40, stdev=371.11, samples=20 00:30:10.738 iops : min= 216, max= 522, avg=344.30, stdev=92.71, samples=20 00:30:10.738 lat (msec) : 10=0.46%, 20=1.39%, 50=65.79%, 100=31.03%, 250=1.33% 00:30:10.738 cpu : usr=32.62%, sys=0.61%, ctx=874, majf=0, minf=9 00:30:10.738 IO depths : 1=0.4%, 2=0.9%, 4=7.8%, 8=77.7%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:10.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:10.738 issued rwts: total=3461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.738 filename2: (groupid=0, jobs=1): err= 0: pid=117158: Sat Jul 13 07:16:17 2024 00:30:10.738 read: IOPS=387, BW=1549KiB/s (1586kB/s)(15.2MiB/10055msec) 00:30:10.738 slat (usec): min=4, max=8028, avg=18.40, stdev=182.92 00:30:10.738 clat (msec): min=7, max=143, avg=41.12, stdev=18.13 00:30:10.738 lat (msec): min=7, max=143, avg=41.14, stdev=18.13 00:30:10.738 clat percentiles (msec): 00:30:10.738 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 25], 00:30:10.738 | 30.00th=[ 32], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 41], 00:30:10.738 | 70.00th=[ 47], 80.00th=[ 55], 90.00th=[ 67], 95.00th=[ 80], 00:30:10.738 | 99.00th=[ 99], 99.50th=[ 115], 99.90th=[ 144], 99.95th=[ 144], 00:30:10.738 | 99.99th=[ 144] 00:30:10.738 bw ( KiB/s): min= 888, max= 2320, per=3.06%, avg=1551.00, stdev=450.08, samples=20 00:30:10.738 iops : min= 222, max= 580, avg=387.75, stdev=112.52, samples=20 00:30:10.738 lat (msec) : 10=0.41%, 20=3.88%, 50=73.05%, 100=21.96%, 250=0.69% 00:30:10.738 cpu : usr=39.29%, sys=0.72%, ctx=1100, majf=0, minf=9 00:30:10.738 IO depths : 1=0.8%, 2=1.7%, 4=7.9%, 8=77.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:10.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.738 issued rwts: total=3893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.738 filename2: (groupid=0, jobs=1): err= 0: pid=117159: Sat Jul 13 07:16:17 2024 00:30:10.738 read: IOPS=330, BW=1323KiB/s (1355kB/s)(12.9MiB/10020msec) 00:30:10.738 slat (usec): min=6, max=9995, avg=20.07, stdev=221.17 00:30:10.738 clat (msec): min=18, max=127, avg=48.24, stdev=18.39 00:30:10.738 lat (msec): min=18, max=127, avg=48.26, stdev=18.39 00:30:10.738 clat percentiles (msec): 00:30:10.738 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 30], 20.00th=[ 34], 00:30:10.738 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 44], 60.00th=[ 48], 00:30:10.738 | 70.00th=[ 55], 80.00th=[ 62], 90.00th=[ 70], 95.00th=[ 85], 00:30:10.738 | 99.00th=[ 113], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 128], 00:30:10.738 | 99.99th=[ 128] 00:30:10.739 bw ( KiB/s): min= 768, max= 1768, per=2.60%, avg=1319.20, stdev=298.33, samples=20 00:30:10.739 iops : min= 192, max= 442, avg=329.80, stdev=74.58, samples=20 00:30:10.739 lat (msec) : 20=0.24%, 50=63.97%, 100=33.77%, 250=2.02% 00:30:10.739 cpu : usr=37.70%, sys=0.74%, ctx=1311, majf=0, minf=10 00:30:10.739 IO depths : 1=1.2%, 2=2.4%, 4=8.9%, 8=74.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:30:10.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.739 complete : 0=0.0%, 4=90.1%, 8=5.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.739 issued rwts: total=3314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.739 filename2: (groupid=0, jobs=1): err= 0: pid=117160: Sat Jul 13 07:16:17 2024 00:30:10.739 read: IOPS=322, BW=1290KiB/s (1321kB/s)(12.6MiB/10020msec) 00:30:10.739 slat (usec): min=5, max=4042, avg=14.20, stdev=71.28 00:30:10.739 clat (msec): min=20, max=135, avg=49.50, stdev=18.63 00:30:10.739 lat (msec): min=20, max=135, avg=49.52, stdev=18.63 00:30:10.739 clat percentiles (msec): 00:30:10.739 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 35], 00:30:10.739 | 30.00th=[ 39], 
40.00th=[ 41], 50.00th=[ 45], 60.00th=[ 48], 00:30:10.739 | 70.00th=[ 57], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 87], 00:30:10.739 | 99.00th=[ 105], 99.50th=[ 120], 99.90th=[ 136], 99.95th=[ 136], 00:30:10.739 | 99.99th=[ 136] 00:30:10.739 bw ( KiB/s): min= 768, max= 1664, per=2.53%, avg=1286.00, stdev=324.98, samples=20 00:30:10.739 iops : min= 192, max= 416, avg=321.45, stdev=81.21, samples=20 00:30:10.739 lat (msec) : 50=64.20%, 100=34.47%, 250=1.33% 00:30:10.739 cpu : usr=42.43%, sys=0.78%, ctx=1393, majf=0, minf=9 00:30:10.739 IO depths : 1=0.9%, 2=2.3%, 4=8.3%, 8=75.1%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:10.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.739 complete : 0=0.0%, 4=90.3%, 8=5.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.739 issued rwts: total=3232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.739 filename2: (groupid=0, jobs=1): err= 0: pid=117161: Sat Jul 13 07:16:17 2024 00:30:10.739 read: IOPS=373, BW=1495KiB/s (1531kB/s)(14.6MiB/10027msec) 00:30:10.739 slat (usec): min=6, max=8033, avg=18.51, stdev=185.83 00:30:10.739 clat (msec): min=15, max=118, avg=42.62, stdev=15.82 00:30:10.739 lat (msec): min=15, max=118, avg=42.64, stdev=15.82 00:30:10.739 clat percentiles (msec): 00:30:10.739 | 1.00th=[ 17], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 30], 00:30:10.739 | 30.00th=[ 33], 40.00th=[ 36], 50.00th=[ 41], 60.00th=[ 45], 00:30:10.739 | 70.00th=[ 48], 80.00th=[ 56], 90.00th=[ 64], 95.00th=[ 71], 00:30:10.739 | 99.00th=[ 88], 99.50th=[ 99], 99.90th=[ 118], 99.95th=[ 118], 00:30:10.739 | 99.99th=[ 118] 00:30:10.739 bw ( KiB/s): min= 920, max= 2152, per=2.95%, avg=1495.30, stdev=354.93, samples=20 00:30:10.739 iops : min= 230, max= 538, avg=373.75, stdev=88.68, samples=20 00:30:10.739 lat (msec) : 20=2.43%, 50=72.97%, 100=24.26%, 250=0.35% 00:30:10.739 cpu : usr=42.37%, sys=0.90%, ctx=1137, majf=0, minf=9 00:30:10.739 IO depths : 1=1.4%, 2=3.2%, 4=10.1%, 8=73.2%, 16=12.0%, 32=0.0%, >=64=0.0% 00:30:10.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.739 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.739 issued rwts: total=3747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.739 filename2: (groupid=0, jobs=1): err= 0: pid=117162: Sat Jul 13 07:16:17 2024 00:30:10.739 read: IOPS=342, BW=1368KiB/s (1401kB/s)(13.4MiB/10032msec) 00:30:10.739 slat (usec): min=5, max=8027, avg=19.35, stdev=211.78 00:30:10.739 clat (msec): min=18, max=117, avg=46.63, stdev=16.99 00:30:10.739 lat (msec): min=18, max=117, avg=46.65, stdev=17.00 00:30:10.739 clat percentiles (msec): 00:30:10.739 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 34], 00:30:10.739 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 45], 60.00th=[ 48], 00:30:10.739 | 70.00th=[ 55], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 80], 00:30:10.739 | 99.00th=[ 100], 99.50th=[ 107], 99.90th=[ 118], 99.95th=[ 118], 00:30:10.739 | 99.99th=[ 118] 00:30:10.739 bw ( KiB/s): min= 904, max= 1936, per=2.69%, avg=1367.90, stdev=323.89, samples=20 00:30:10.739 iops : min= 226, max= 484, avg=341.95, stdev=80.95, samples=20 00:30:10.739 lat (msec) : 20=0.26%, 50=67.18%, 100=31.83%, 250=0.73% 00:30:10.739 cpu : usr=32.29%, sys=0.69%, ctx=1064, majf=0, minf=9 00:30:10.739 IO depths : 1=0.8%, 2=2.2%, 4=9.2%, 8=75.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:10.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.739 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.739 issued rwts: total=3431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.739 00:30:10.739 Run status group 0 (all jobs): 00:30:10.739 READ: bw=49.6MiB/s (52.0MB/s), 1290KiB/s-3724KiB/s (1321kB/s-3813kB/s), io=499MiB (523MB), run=10001-10061msec 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 bdev_null0 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 [2024-07-13 07:16:17.422075] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:10.739 
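The create_subsystems trace above boils down to four RPCs per subsystem. A minimal standalone sketch of the cnode0 setup, assuming a running SPDK nvmf target with the TCP transport already created, and using scripts/rpc.py in place of the harness's rpc_cmd wrapper:

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 (arguments as in the trace)
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

    # NVMe-oF subsystem, namespace and TCP listener, exactly as traced for cnode0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The same sequence repeats below for cnode1 with bdev_null1.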
07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 bdev_null1 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.739 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 
00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:10.740 { 00:30:10.740 "params": { 00:30:10.740 "name": "Nvme$subsystem", 00:30:10.740 "trtype": "$TEST_TRANSPORT", 00:30:10.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.740 "adrfam": "ipv4", 00:30:10.740 "trsvcid": "$NVMF_PORT", 00:30:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.740 "hdgst": ${hdgst:-false}, 00:30:10.740 "ddgst": ${ddgst:-false} 00:30:10.740 }, 00:30:10.740 "method": "bdev_nvme_attach_controller" 00:30:10.740 } 00:30:10.740 EOF 00:30:10.740 )") 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:10.740 { 00:30:10.740 "params": { 00:30:10.740 "name": "Nvme$subsystem", 00:30:10.740 "trtype": "$TEST_TRANSPORT", 00:30:10.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.740 "adrfam": "ipv4", 00:30:10.740 "trsvcid": "$NVMF_PORT", 00:30:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.740 "hdgst": ${hdgst:-false}, 00:30:10.740 "ddgst": ${ddgst:-false} 00:30:10.740 }, 00:30:10.740 "method": "bdev_nvme_attach_controller" 00:30:10.740 } 00:30:10.740 EOF 00:30:10.740 )") 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
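The fio_bdev/fio_plugin wrapper traced around this point probes the spdk_bdev fio plugin with ldd for sanitizer runtimes (libasan, then libclang_rt.asan) and preloads whatever it finds together with the plugin itself before handing control to fio. A condensed sketch of that logic, assuming the plugin sits at build/fio/spdk_bdev inside the SPDK checkout:

    # Collect sanitizer runtimes the plugin links against, if any, so that
    # ASan symbols are resolved before the plugin is loaded.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    preload=
    for sanitizer in libasan libclang_rt.asan; do
        lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$lib" ]] && preload+="$lib "
    done

    # Preload the (possibly empty) sanitizer list plus the plugin, then run fio.
    LD_PRELOAD="$preload $plugin" /usr/src/fio/fio "$@"

In this run both greps come back empty, which is why the LD_PRELOAD shown just below contains only the plugin path.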
00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:10.740 "params": { 00:30:10.740 "name": "Nvme0", 00:30:10.740 "trtype": "tcp", 00:30:10.740 "traddr": "10.0.0.2", 00:30:10.740 "adrfam": "ipv4", 00:30:10.740 "trsvcid": "4420", 00:30:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:10.740 "hdgst": false, 00:30:10.740 "ddgst": false 00:30:10.740 }, 00:30:10.740 "method": "bdev_nvme_attach_controller" 00:30:10.740 },{ 00:30:10.740 "params": { 00:30:10.740 "name": "Nvme1", 00:30:10.740 "trtype": "tcp", 00:30:10.740 "traddr": "10.0.0.2", 00:30:10.740 "adrfam": "ipv4", 00:30:10.740 "trsvcid": "4420", 00:30:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:10.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:10.740 "hdgst": false, 00:30:10.740 "ddgst": false 00:30:10.740 }, 00:30:10.740 "method": "bdev_nvme_attach_controller" 00:30:10.740 }' 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:10.740 07:16:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.740 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:10.740 ... 00:30:10.740 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:10.740 ... 
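The job header above matches the parameters set at target/dif.sh@115 earlier: random reads with an 8k read / 16k write / 128k trim block-size triple, queue depth 8, two jobs per filename and a 5-second runtime. A roughly equivalent standalone invocation is sketched here; the harness actually feeds the job file and JSON config through /dev/fd, and the bdev names Nvme0n1/Nvme1n1 plus the ./nvme.json path are illustrative assumptions based on the default <controller>n<nsid> naming:

    # Hedged equivalent of the traced fio run (job file and JSON paths are stand-ins)
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./nvme.json \
        --rw=randread --bs=8k,16k,128k \
        --iodepth=8 --numjobs=2 --runtime=5 \
        --name=filename0 --filename=Nvme0n1 \
        --name=filename1 --filename=Nvme1n1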
00:30:10.740 fio-3.35 00:30:10.740 Starting 4 threads 00:30:16.022 00:30:16.022 filename0: (groupid=0, jobs=1): err= 0: pid=117341: Sat Jul 13 07:16:23 2024 00:30:16.022 read: IOPS=2185, BW=17.1MiB/s (17.9MB/s)(85.4MiB/5001msec) 00:30:16.022 slat (nsec): min=6194, max=85913, avg=9960.33, stdev=6869.32 00:30:16.022 clat (usec): min=1036, max=7077, avg=3609.78, stdev=212.32 00:30:16.022 lat (usec): min=1043, max=7100, avg=3619.74, stdev=212.81 00:30:16.022 clat percentiles (usec): 00:30:16.022 | 1.00th=[ 3326], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3490], 00:30:16.022 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:30:16.022 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3916], 00:30:16.022 | 99.00th=[ 4146], 99.50th=[ 4228], 99.90th=[ 5276], 99.95th=[ 7046], 00:30:16.022 | 99.99th=[ 7046] 00:30:16.022 bw ( KiB/s): min=16896, max=18304, per=25.00%, avg=17464.89, stdev=448.01, samples=9 00:30:16.022 iops : min= 2112, max= 2288, avg=2183.11, stdev=56.00, samples=9 00:30:16.022 lat (msec) : 2=0.15%, 4=97.38%, 10=2.47% 00:30:16.022 cpu : usr=95.48%, sys=3.40%, ctx=30, majf=0, minf=9 00:30:16.022 IO depths : 1=11.2%, 2=24.0%, 4=50.9%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.022 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.022 issued rwts: total=10928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.022 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:16.022 filename0: (groupid=0, jobs=1): err= 0: pid=117342: Sat Jul 13 07:16:23 2024 00:30:16.022 read: IOPS=2182, BW=17.1MiB/s (17.9MB/s)(85.3MiB/5003msec) 00:30:16.022 slat (usec): min=6, max=103, avg=19.57, stdev=10.81 00:30:16.022 clat (usec): min=2682, max=5722, avg=3561.13, stdev=187.86 00:30:16.022 lat (usec): min=2695, max=5772, avg=3580.71, stdev=189.29 00:30:16.022 clat percentiles (usec): 00:30:16.022 | 1.00th=[ 3261], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:30:16.022 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:30:16.022 | 70.00th=[ 3621], 80.00th=[ 3687], 90.00th=[ 3785], 95.00th=[ 3884], 00:30:16.022 | 99.00th=[ 4080], 99.50th=[ 4228], 99.90th=[ 5473], 99.95th=[ 5669], 00:30:16.022 | 99.99th=[ 5735] 00:30:16.022 bw ( KiB/s): min=16896, max=18304, per=25.00%, avg=17462.70, stdev=420.96, samples=10 00:30:16.022 iops : min= 2112, max= 2288, avg=2182.80, stdev=52.60, samples=10 00:30:16.022 lat (msec) : 4=98.24%, 10=1.76% 00:30:16.022 cpu : usr=94.88%, sys=3.78%, ctx=77, majf=0, minf=9 00:30:16.022 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.022 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.022 issued rwts: total=10920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.022 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:16.022 filename1: (groupid=0, jobs=1): err= 0: pid=117343: Sat Jul 13 07:16:23 2024 00:30:16.022 read: IOPS=2183, BW=17.1MiB/s (17.9MB/s)(85.3MiB/5002msec) 00:30:16.022 slat (usec): min=6, max=103, avg=18.79, stdev=10.77 00:30:16.022 clat (usec): min=2145, max=8529, avg=3564.80, stdev=218.84 00:30:16.022 lat (usec): min=2165, max=8535, avg=3583.59, stdev=219.98 00:30:16.022 clat percentiles (usec): 00:30:16.022 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:30:16.022 | 30.00th=[ 3458], 40.00th=[ 3523], 
50.00th=[ 3556], 60.00th=[ 3589], 00:30:16.022 | 70.00th=[ 3621], 80.00th=[ 3687], 90.00th=[ 3785], 95.00th=[ 3884], 00:30:16.022 | 99.00th=[ 4113], 99.50th=[ 4424], 99.90th=[ 5473], 99.95th=[ 5669], 00:30:16.022 | 99.99th=[ 7046] 00:30:16.022 bw ( KiB/s): min=17008, max=18304, per=24.98%, avg=17450.67, stdev=422.18, samples=9 00:30:16.022 iops : min= 2126, max= 2288, avg=2181.33, stdev=52.77, samples=9 00:30:16.022 lat (msec) : 4=97.99%, 10=2.01% 00:30:16.022 cpu : usr=94.80%, sys=3.92%, ctx=11, majf=0, minf=9 00:30:16.022 IO depths : 1=11.5%, 2=25.0%, 4=50.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.022 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.022 issued rwts: total=10920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.022 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:16.022 filename1: (groupid=0, jobs=1): err= 0: pid=117344: Sat Jul 13 07:16:23 2024 00:30:16.022 read: IOPS=2182, BW=17.0MiB/s (17.9MB/s)(85.3MiB/5004msec) 00:30:16.022 slat (nsec): min=6281, max=88912, avg=13120.87, stdev=8590.98 00:30:16.022 clat (usec): min=2716, max=6603, avg=3607.36, stdev=190.47 00:30:16.022 lat (usec): min=2736, max=6625, avg=3620.48, stdev=189.95 00:30:16.022 clat percentiles (usec): 00:30:16.022 | 1.00th=[ 3294], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3490], 00:30:16.022 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:30:16.022 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3916], 00:30:16.022 | 99.00th=[ 4113], 99.50th=[ 4293], 99.90th=[ 5538], 99.95th=[ 6128], 00:30:16.022 | 99.99th=[ 6194] 00:30:16.022 bw ( KiB/s): min=16896, max=18176, per=25.00%, avg=17459.20, stdev=401.16, samples=10 00:30:16.022 iops : min= 2112, max= 2272, avg=2182.40, stdev=50.14, samples=10 00:30:16.022 lat (msec) : 4=97.64%, 10=2.36% 00:30:16.022 cpu : usr=95.86%, sys=2.98%, ctx=9, majf=0, minf=9 00:30:16.022 IO depths : 1=12.2%, 2=24.9%, 4=50.1%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.022 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.022 issued rwts: total=10920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.022 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:16.022 00:30:16.022 Run status group 0 (all jobs): 00:30:16.022 READ: bw=68.2MiB/s (71.5MB/s), 17.0MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=341MiB (358MB), run=5001-5004msec 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.022 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.023 00:30:16.023 real 0m29.866s 00:30:16.023 user 3m0.281s 00:30:16.023 sys 0m4.671s 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:16.023 ************************************ 00:30:16.023 07:16:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:16.023 END TEST fio_dif_rand_params 00:30:16.023 ************************************ 00:30:16.023 07:16:23 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:16.023 07:16:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:16.023 07:16:23 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:16.023 07:16:23 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.023 07:16:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:16.023 ************************************ 00:30:16.023 START TEST fio_dif_digest 00:30:16.023 ************************************ 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:16.023 07:16:23 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:16.023 bdev_null0 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:16.023 [2024-07-13 07:16:23.737633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:16.023 { 00:30:16.023 "params": { 00:30:16.023 "name": "Nvme$subsystem", 00:30:16.023 "trtype": "$TEST_TRANSPORT", 00:30:16.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.023 "adrfam": "ipv4", 00:30:16.023 "trsvcid": "$NVMF_PORT", 00:30:16.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.023 "hdgst": ${hdgst:-false}, 00:30:16.023 "ddgst": ${ddgst:-false} 00:30:16.023 }, 00:30:16.023 "method": "bdev_nvme_attach_controller" 00:30:16.023 } 
00:30:16.023 EOF 00:30:16.023 )") 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
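Relative to the rand_params run above, the digest test changes only two things: the null bdev is created with protection information type 3 (target/dif.sh@21 above passes --dif-type 3), and both NVMe/TCP digests are requested when the controller is attached, which is what the "hdgst": true / "ddgst": true fields in the rendered JSON just below express. The bdev side, again sketched with scripts/rpc.py rather than the harness's rpc_cmd:

    # DIF type 3 null bdev for the digest run (same size/block/metadata arguments as the trace)
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3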
00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:16.023 "params": { 00:30:16.023 "name": "Nvme0", 00:30:16.023 "trtype": "tcp", 00:30:16.023 "traddr": "10.0.0.2", 00:30:16.023 "adrfam": "ipv4", 00:30:16.023 "trsvcid": "4420", 00:30:16.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.023 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:16.023 "hdgst": true, 00:30:16.023 "ddgst": true 00:30:16.023 }, 00:30:16.023 "method": "bdev_nvme_attach_controller" 00:30:16.023 }' 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:16.023 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.024 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:16.024 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:16.024 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:16.024 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:16.024 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:16.024 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:16.024 07:16:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.024 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:16.024 ... 
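The job shape for this run comes from target/dif.sh@127 above: a single 128k block size for reads, writes and trims, three jobs, queue depth 3 and a 10-second runtime against the one digest-enabled subsystem. A hedged command-line equivalent, with the same caveats as before (the bdev name Nvme0n1 and the JSON path are illustrative, and the harness really passes both files via /dev/fd):

    # Hedged equivalent of the traced digest run
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf ./nvme-digest.json \
        --rw=randread --bs=128k \
        --iodepth=3 --numjobs=3 --runtime=10 \
        --name=filename0 --filename=Nvme0n1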
00:30:16.024 fio-3.35 00:30:16.024 Starting 3 threads 00:30:28.266 00:30:28.266 filename0: (groupid=0, jobs=1): err= 0: pid=117450: Sat Jul 13 07:16:34 2024 00:30:28.266 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(310MiB/10042msec) 00:30:28.266 slat (nsec): min=6569, max=91634, avg=18159.96, stdev=6888.47 00:30:28.266 clat (usec): min=8157, max=53814, avg=12134.64, stdev=7265.54 00:30:28.266 lat (usec): min=8168, max=53836, avg=12152.80, stdev=7265.47 00:30:28.266 clat percentiles (usec): 00:30:28.266 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:30:28.266 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:30:28.266 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[12518], 00:30:28.266 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52691], 99.95th=[53216], 00:30:28.266 | 99.99th=[53740] 00:30:28.266 bw ( KiB/s): min=24320, max=36096, per=35.37%, avg=31680.00, stdev=3354.20, samples=20 00:30:28.266 iops : min= 190, max= 282, avg=247.50, stdev=26.20, samples=20 00:30:28.266 lat (msec) : 10=14.73%, 20=82.00%, 50=0.20%, 100=3.07% 00:30:28.267 cpu : usr=94.37%, sys=4.26%, ctx=16, majf=0, minf=0 00:30:28.267 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.267 issued rwts: total=2478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.267 filename0: (groupid=0, jobs=1): err= 0: pid=117451: Sat Jul 13 07:16:34 2024 00:30:28.267 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(306MiB/10004msec) 00:30:28.267 slat (nsec): min=6683, max=62711, avg=14753.58, stdev=6395.78 00:30:28.267 clat (usec): min=6458, max=16426, avg=12240.62, stdev=2066.13 00:30:28.267 lat (usec): min=6469, max=16450, avg=12255.38, stdev=2066.77 00:30:28.267 clat percentiles (usec): 00:30:28.267 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 8225], 20.00th=[11338], 00:30:28.267 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:30:28.267 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14222], 95.00th=[14746], 00:30:28.267 | 99.00th=[15401], 99.50th=[15533], 99.90th=[16188], 99.95th=[16319], 00:30:28.267 | 99.99th=[16450] 00:30:28.267 bw ( KiB/s): min=28928, max=35584, per=35.05%, avg=31397.05, stdev=1900.14, samples=19 00:30:28.267 iops : min= 226, max= 278, avg=245.26, stdev=14.84, samples=19 00:30:28.267 lat (msec) : 10=15.85%, 20=84.15% 00:30:28.267 cpu : usr=93.84%, sys=4.59%, ctx=24, majf=0, minf=0 00:30:28.267 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.267 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.267 filename0: (groupid=0, jobs=1): err= 0: pid=117452: Sat Jul 13 07:16:34 2024 00:30:28.267 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10005msec) 00:30:28.267 slat (nsec): min=6558, max=88467, avg=15025.64, stdev=6580.44 00:30:28.267 clat (usec): min=6332, max=18395, avg=14264.27, stdev=1957.09 00:30:28.267 lat (usec): min=6342, max=18415, avg=14279.30, stdev=1958.43 00:30:28.267 clat percentiles (usec): 00:30:28.267 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[13698], 00:30:28.267 | 
30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:30:28.267 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:30:28.267 | 99.00th=[16909], 99.50th=[17171], 99.90th=[18220], 99.95th=[18220], 00:30:28.267 | 99.99th=[18482] 00:30:28.267 bw ( KiB/s): min=24576, max=29952, per=30.10%, avg=26960.84, stdev=1533.75, samples=19 00:30:28.267 iops : min= 192, max= 234, avg=210.63, stdev=11.98, samples=19 00:30:28.267 lat (msec) : 10=7.66%, 20=92.34% 00:30:28.267 cpu : usr=94.02%, sys=4.56%, ctx=8, majf=0, minf=0 00:30:28.267 IO depths : 1=6.5%, 2=93.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.267 issued rwts: total=2101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.267 00:30:28.267 Run status group 0 (all jobs): 00:30:28.267 READ: bw=87.5MiB/s (91.7MB/s), 26.2MiB/s-30.8MiB/s (27.5MB/s-32.3MB/s), io=878MiB (921MB), run=10004-10042msec 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.267 00:30:28.267 real 0m11.092s 00:30:28.267 user 0m28.974s 00:30:28.267 sys 0m1.642s 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:28.267 07:16:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:28.267 ************************************ 00:30:28.267 END TEST fio_dif_digest 00:30:28.267 ************************************ 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:28.267 07:16:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:28.267 07:16:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:28.267 rmmod nvme_tcp 00:30:28.267 rmmod nvme_fabrics 00:30:28.267 rmmod 
nvme_keyring 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 116661 ']' 00:30:28.267 07:16:34 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 116661 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 116661 ']' 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 116661 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116661 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:28.267 killing process with pid 116661 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116661' 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@967 -- # kill 116661 00:30:28.267 07:16:34 nvmf_dif -- common/autotest_common.sh@972 -- # wait 116661 00:30:28.267 07:16:35 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:28.267 07:16:35 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:28.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:28.267 Waiting for block devices as requested 00:30:28.267 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:28.267 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:28.267 07:16:35 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:28.267 07:16:35 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:28.267 07:16:35 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:28.267 07:16:35 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:28.267 07:16:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.267 07:16:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:28.267 07:16:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.267 07:16:35 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:28.267 ************************************ 00:30:28.267 END TEST nvmf_dif 00:30:28.267 ************************************ 00:30:28.267 00:30:28.267 real 1m5.735s 00:30:28.267 user 4m50.112s 00:30:28.267 sys 0m14.612s 00:30:28.267 07:16:35 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:28.267 07:16:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:28.267 07:16:35 -- common/autotest_common.sh@1142 -- # return 0 00:30:28.267 07:16:35 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:28.267 07:16:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:28.267 07:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:28.267 07:16:35 -- common/autotest_common.sh@10 -- # set +x 00:30:28.267 ************************************ 00:30:28.267 START TEST nvmf_abort_qd_sizes 00:30:28.267 ************************************ 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 
-- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:28.267 * Looking for test storage... 00:30:28.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.267 07:16:35 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:28.268 07:16:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:28.268 07:16:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:28.268 Cannot find device "nvmf_tgt_br" 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:28.268 Cannot find device "nvmf_tgt_br2" 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:28.268 Cannot find device "nvmf_tgt_br" 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:28.268 Cannot find device "nvmf_tgt_br2" 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:28.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:28.268 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:28.268 07:16:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:28.268 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:28.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:30:28.527 00:30:28.527 --- 10.0.0.2 ping statistics --- 00:30:28.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.527 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:30:28.527 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:28.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:28.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:30:28.527 00:30:28.527 --- 10.0.0.3 ping statistics --- 00:30:28.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.527 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:30:28.527 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:28.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
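The nvmf_veth_init sequence traced above builds a small virtual topology for the TCP tests: the initiator interface (nvmf_init_if, 10.0.0.1) stays in the root namespace, the two target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, everything is joined over the nvmf_br bridge, port 4420 is opened in iptables, and reachability is confirmed with single pings. A condensed, hand-runnable sketch of that setup follows; the interface names and addresses are copied from the trace, but this is a reconstruction, not the test/nvmf/common.sh helper itself.

    # Sketch of the veth/netns topology used by the TCP autotest (reconstruction,
    # values taken from the trace above).
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator side, two for the target side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace and get the 10.0.0.0/24 addresses.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic to the default port and verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1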
00:30:28.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:30:28.527 00:30:28.527 --- 10.0.0.1 ping statistics --- 00:30:28.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.527 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:30:28.527 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.527 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:30:28.527 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:28.527 07:16:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:29.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:29.092 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:29.092 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:29.349 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.349 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:29.349 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=118036 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 118036 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 118036 ']' 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:29.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:29.350 07:16:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:29.350 [2024-07-13 07:16:37.283472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
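With the namespace verified, nvmfappstart launches the target application inside it and blocks until the RPC socket answers. A minimal sketch of that start-up, assuming the default /var/tmp/spdk.sock socket named in the trace and using rpc_get_methods as the readiness probe (the harness itself uses its waitforlisten helper):

    # Launch nvmf_tgt inside the target namespace (flags copied from the trace)
    # and wait for its JSON-RPC socket before issuing configuration calls.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!

    # Poll the RPC socket; rpc_get_methods succeeds once the app is listening.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready"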
00:30:29.350 [2024-07-13 07:16:37.283589] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.608 [2024-07-13 07:16:37.425844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:29.608 [2024-07-13 07:16:37.536929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.608 [2024-07-13 07:16:37.537001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.608 [2024-07-13 07:16:37.537015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.608 [2024-07-13 07:16:37.537026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.608 [2024-07-13 07:16:37.537036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:29.608 [2024-07-13 07:16:37.537198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.608 [2024-07-13 07:16:37.537597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:29.608 [2024-07-13 07:16:37.538284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:29.608 [2024-07-13 07:16:37.538332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.542 07:16:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:30.543 07:16:38 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.543 07:16:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:30.543 ************************************ 00:30:30.543 START TEST spdk_target_abort 00:30:30.543 ************************************ 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:30.543 spdk_targetn1 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:30.543 [2024-07-13 07:16:38.441986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:30.543 [2024-07-13 07:16:38.470198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.543 07:16:38 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:30.543 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:30.544 07:16:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:33.827 Initializing NVMe Controllers 00:30:33.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:33.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:33.827 Initialization complete. Launching workers. 
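The subsystem exercised here, a PCIe-attached controller exported over NVMe/TCP on 10.0.0.2:4420, is assembled through the rpc_cmd calls traced above. Issued by hand against scripts/rpc.py, roughly the same configuration would look like the sketch below; every method name and argument is copied from the trace, only the wrapper variable is added for readability.

    # Rough by-hand equivalent of the spdk_target_abort setup seen above.
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

    # Claim the local NVMe device and expose it as the bdev "spdk_targetn1".
    $RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target

    # Create the TCP transport and an allow-any-host subsystem, then attach the
    # namespace and a listener on the in-namespace target address.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420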
00:30:33.827 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10508, failed: 0 00:30:33.827 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1167, failed to submit 9341 00:30:33.827 success 732, unsuccess 435, failed 0 00:30:33.827 07:16:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:33.827 07:16:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:37.106 Initializing NVMe Controllers 00:30:37.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:37.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:37.106 Initialization complete. Launching workers. 00:30:37.106 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5993, failed: 0 00:30:37.106 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1222, failed to submit 4771 00:30:37.106 success 288, unsuccess 934, failed 0 00:30:37.106 07:16:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:37.106 07:16:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:40.388 Initializing NVMe Controllers 00:30:40.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:40.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:40.388 Initialization complete. Launching workers. 
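Each pass above drives the same mixed read/write workload through build/examples/abort and differs only in the abort queue depth passed via -q. A sketch of the sweep as it could be run by hand, with the arguments copied from the trace:

    # Sweep the abort queue depth against the TCP subsystem created above.
    TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
    done

In each per-run summary, "abort submitted" counts the abort commands that were sent; "success" and "unsuccess" split those into aborts the controller honoured versus aborts that completed without cancelling anything, typically because the targeted I/O had already finished, while "failed" would count abort commands that themselves errored out.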
00:30:40.388 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30302, failed: 0 00:30:40.388 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2651, failed to submit 27651 00:30:40.388 success 450, unsuccess 2201, failed 0 00:30:40.388 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:40.388 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.388 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.388 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.388 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:40.388 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.388 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 118036 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 118036 ']' 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 118036 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118036 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:40.645 killing process with pid 118036 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118036' 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 118036 00:30:40.645 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 118036 00:30:40.902 00:30:40.902 real 0m10.590s 00:30:40.902 user 0m43.149s 00:30:40.902 sys 0m1.819s 00:30:40.902 ************************************ 00:30:40.902 END TEST spdk_target_abort 00:30:40.902 ************************************ 00:30:40.902 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:40.902 07:16:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:41.160 07:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:41.160 07:16:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:41.160 07:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:41.160 07:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.160 07:16:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:41.160 
************************************ 00:30:41.160 START TEST kernel_target_abort 00:30:41.160 ************************************ 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:41.160 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:41.418 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:41.418 Waiting for block devices as requested 00:30:41.418 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:41.675 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:41.676 No valid GPT data, bailing 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:41.676 No valid GPT data, bailing 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
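Before the kernel target can be configured, the test walks /sys/block for an NVMe namespace it is allowed to consume: zoned devices are skipped and anything with a recognisable partition table is treated as in use, so, as the rest of the scan below shows, the last device reporting "No valid GPT data" becomes the backing block device. A simplified sketch of that selection, assuming blkid's PTTYPE probe as the only in-use check (the real helper also runs scripts/spdk-gpt.py):

    # Pick an idle NVMe block device to back the kernel nvmet namespace.
    nvme=""
    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        zoned=$(cat "$block/queue/zoned" 2>/dev/null || echo none)
        # Zoned namespaces cannot back a plain nvmet namespace here.
        [[ $zoned != none ]] && continue
        # A device that still carries a partition table is treated as in use.
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
        nvme=$dev   # keep the last free candidate, as the helper does
    done
    echo "using $nvme as the kernel target backing device"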
00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:41.676 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:41.934 No valid GPT data, bailing 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:41.934 No valid GPT data, bailing 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd --hostid=43021b44-defc-4eee-995c-65b6e79138bd -a 10.0.0.1 -t tcp -s 4420 00:30:41.934 00:30:41.934 Discovery Log Number of Records 2, Generation counter 2 00:30:41.934 =====Discovery Log Entry 0====== 00:30:41.934 trtype: tcp 00:30:41.934 adrfam: ipv4 00:30:41.934 subtype: current discovery subsystem 00:30:41.934 treq: not specified, sq flow control disable supported 00:30:41.934 portid: 1 00:30:41.934 trsvcid: 4420 00:30:41.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:41.934 traddr: 10.0.0.1 00:30:41.934 eflags: none 00:30:41.934 sectype: none 00:30:41.934 =====Discovery Log Entry 1====== 00:30:41.934 trtype: tcp 00:30:41.934 adrfam: ipv4 00:30:41.934 subtype: nvme subsystem 00:30:41.934 treq: not specified, sq flow control disable supported 00:30:41.934 portid: 1 00:30:41.934 trsvcid: 4420 00:30:41.934 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:41.934 traddr: 10.0.0.1 00:30:41.934 eflags: none 00:30:41.934 sectype: none 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:41.934 07:16:49 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:41.934 07:16:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:45.220 Initializing NVMe Controllers 00:30:45.220 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:45.220 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:45.220 Initialization complete. Launching workers. 00:30:45.220 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38198, failed: 0 00:30:45.220 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38198, failed to submit 0 00:30:45.220 success 0, unsuccess 38198, failed 0 00:30:45.220 07:16:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:45.220 07:16:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:48.506 Initializing NVMe Controllers 00:30:48.506 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:48.506 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:48.506 Initialization complete. Launching workers. 
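The configure_kernel_target sequence traced above builds the Linux nvmet target purely through configfs: one subsystem, one namespace backed by /dev/nvme1n1, and one TCP port on 10.0.0.1:4420, followed by an nvme discover probe. The xtrace hides which attribute files the echo calls write to, so the sketch below is a reconstruction that assumes the standard nvmet configfs attribute names; the echoed values come from the trace.

    # Assumed reconstruction of the kernel nvmet setup shown above.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    modprobe nvmet_tcp   # TCP transport; removed again at teardown with "modprobe -r nvmet_tcp nvmet"
    mkdir "$subsys" "$ns" "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # assumed attribute file
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$ns/device_path"
    echo 1 > "$ns/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # Confirm the subsystem is advertised next to the discovery subsystem.
    nvme discover -t tcp -a 10.0.0.1 -s 4420

The clean_kernel_target step later in the log reverses this: it disables the namespace, removes the port symlink, rmdirs the configfs tree, and unloads nvmet_tcp and nvmet.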
00:30:48.506 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75132, failed: 0 00:30:48.506 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31564, failed to submit 43568 00:30:48.506 success 0, unsuccess 31564, failed 0 00:30:48.506 07:16:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:48.506 07:16:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:51.796 Initializing NVMe Controllers 00:30:51.796 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:51.796 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:51.796 Initialization complete. Launching workers. 00:30:51.796 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81571, failed: 0 00:30:51.796 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20346, failed to submit 61225 00:30:51.796 success 0, unsuccess 20346, failed 0 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:51.796 07:16:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:52.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:53.297 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:53.297 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:53.297 00:30:53.297 real 0m12.259s 00:30:53.297 user 0m5.689s 00:30:53.297 sys 0m3.703s 00:30:53.297 07:17:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:53.297 07:17:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:53.297 ************************************ 00:30:53.297 END TEST kernel_target_abort 00:30:53.297 ************************************ 00:30:53.297 07:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:53.297 07:17:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:53.297 
07:17:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:53.297 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:53.297 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:53.297 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:53.297 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:53.297 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:53.297 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:53.297 rmmod nvme_tcp 00:30:53.555 rmmod nvme_fabrics 00:30:53.555 rmmod nvme_keyring 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 118036 ']' 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 118036 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 118036 ']' 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 118036 00:30:53.555 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (118036) - No such process 00:30:53.555 Process with pid 118036 is not found 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 118036 is not found' 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:53.555 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:53.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:53.816 Waiting for block devices as requested 00:30:53.816 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:54.096 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:54.096 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:54.096 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:54.096 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:54.096 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:54.096 07:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.096 07:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:54.096 07:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.096 07:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:54.096 00:30:54.096 real 0m26.122s 00:30:54.096 user 0m50.017s 00:30:54.096 sys 0m6.883s 00:30:54.096 07:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:54.096 ************************************ 00:30:54.096 END TEST nvmf_abort_qd_sizes 00:30:54.096 07:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:54.096 ************************************ 00:30:54.096 07:17:02 -- common/autotest_common.sh@1142 -- # return 0 00:30:54.096 07:17:02 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:54.096 07:17:02 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:30:54.096 07:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:54.096 07:17:02 -- common/autotest_common.sh@10 -- # set +x 00:30:54.096 ************************************ 00:30:54.096 START TEST keyring_file 00:30:54.096 ************************************ 00:30:54.096 07:17:02 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:54.096 * Looking for test storage... 00:30:54.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:54.096 07:17:02 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:54.096 07:17:02 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.096 07:17:02 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:54.373 07:17:02 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.373 07:17:02 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.373 07:17:02 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.373 07:17:02 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.373 07:17:02 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.373 07:17:02 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.373 07:17:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:54.373 07:17:02 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:54.373 07:17:02 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:54.373 07:17:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8BWxuezCSt 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8BWxuezCSt 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8BWxuezCSt 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.8BWxuezCSt 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HDdbG1fXZa 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:54.374 07:17:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HDdbG1fXZa 00:30:54.374 07:17:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HDdbG1fXZa 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.HDdbG1fXZa 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=118911 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:54.374 07:17:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 118911 00:30:54.374 07:17:02 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 118911 ']' 00:30:54.374 07:17:02 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.374 07:17:02 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:54.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.374 07:17:02 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.374 07:17:02 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:54.374 07:17:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:54.374 [2024-07-13 07:17:02.363772] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
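Up to this point keyring/file.sh has only been preparing its inputs: two interchange-format PSK files and a running spdk_tgt to talk to. Condensed into a sketch (the exact NVMeTLSkey-1 encoding is produced by the inline python snippet inside format_interchange_psk in nvmf/common.sh and is not reproduced here, and the redirection of its output into the temp file is assumed rather than visible in the trace):

  key0=00112233445566778899aabbccddeeff
  key0path=$(mktemp)                               # /tmp/tmp.8BWxuezCSt in this run
  format_interchange_psk "$key0" 0 > "$key0path"   # wrap the raw hex key in NVMeTLSkey-1 form
  chmod 0600 "$key0path"                           # looser modes are rejected, see the 0660 negative test later in this log
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  tgtpid=$!                                        # 118911 in this run
  waitforlisten "$tgtpid"                          # blocks until /var/tmp/spdk.sock is up

The same prep_key steps produce key1 at /tmp/tmp.HDdbG1fXZa from 112233445566778899aabbccddeeff00, and the spdk_tgt startup is what the DPDK/EAL lines below belong to.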
00:30:54.374 [2024-07-13 07:17:02.363887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118911 ] 00:30:54.631 [2024-07-13 07:17:02.501721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.631 [2024-07-13 07:17:02.626241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:55.564 07:17:03 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:55.564 [2024-07-13 07:17:03.386883] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.564 null0 00:30:55.564 [2024-07-13 07:17:03.418841] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:55.564 [2024-07-13 07:17:03.419041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:55.564 [2024-07-13 07:17:03.426846] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.564 07:17:03 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:55.564 [2024-07-13 07:17:03.438847] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:55.564 2024/07/13 07:17:03 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:30:55.564 request: 00:30:55.564 { 00:30:55.564 "method": "nvmf_subsystem_add_listener", 00:30:55.564 "params": { 00:30:55.564 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.564 "secure_channel": false, 00:30:55.564 "listen_address": { 00:30:55.564 "trtype": "tcp", 00:30:55.564 "traddr": "127.0.0.1", 00:30:55.564 "trsvcid": "4420" 00:30:55.564 } 00:30:55.564 } 00:30:55.564 } 00:30:55.564 Got JSON-RPC error 
response 00:30:55.564 GoRPCClient: error on JSON-RPC call 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:55.564 07:17:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:55.564 07:17:03 keyring_file -- keyring/file.sh@46 -- # bperfpid=118945 00:30:55.564 07:17:03 keyring_file -- keyring/file.sh@48 -- # waitforlisten 118945 /var/tmp/bperf.sock 00:30:55.564 07:17:03 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:55.565 07:17:03 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 118945 ']' 00:30:55.565 07:17:03 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:55.565 07:17:03 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:55.565 07:17:03 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:55.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:55.565 07:17:03 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:55.565 07:17:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:55.565 [2024-07-13 07:17:03.493423] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:30:55.565 [2024-07-13 07:17:03.493516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118945 ] 00:30:55.565 [2024-07-13 07:17:03.629784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.823 [2024-07-13 07:17:03.715988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.759 07:17:04 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:56.759 07:17:04 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:56.759 07:17:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8BWxuezCSt 00:30:56.759 07:17:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8BWxuezCSt 00:30:56.759 07:17:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HDdbG1fXZa 00:30:56.759 07:17:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HDdbG1fXZa 00:30:57.017 07:17:04 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:57.017 07:17:04 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:57.017 07:17:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.017 07:17:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:57.017 07:17:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.275 07:17:05 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.8BWxuezCSt == 
\/\t\m\p\/\t\m\p\.\8\B\W\x\u\e\z\C\S\t ]] 00:30:57.275 07:17:05 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:30:57.275 07:17:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:57.275 07:17:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.275 07:17:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.275 07:17:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:57.533 07:17:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.HDdbG1fXZa == \/\t\m\p\/\t\m\p\.\H\D\d\b\G\1\f\X\Z\a ]] 00:30:57.533 07:17:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:57.533 07:17:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.533 07:17:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:57.533 07:17:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.533 07:17:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.533 07:17:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:57.791 07:17:05 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:57.791 07:17:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:57.791 07:17:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:57.791 07:17:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.791 07:17:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.791 07:17:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.791 07:17:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:57.791 07:17:05 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:57.791 07:17:05 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:57.791 07:17:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.050 [2024-07-13 07:17:06.040235] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:58.050 nvme0n1 00:30:58.308 07:17:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:58.308 07:17:06 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:58.308 07:17:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.308 07:17:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:58.567 07:17:06 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:58.567 07:17:06 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:58.825 Running I/O for 1 seconds... 00:30:59.762 00:30:59.762 Latency(us) 00:30:59.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.762 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:59.762 nvme0n1 : 1.01 12694.89 49.59 0.00 0.00 10053.16 5362.04 19184.17 00:30:59.762 =================================================================================================================== 00:30:59.762 Total : 12694.89 49.59 0.00 0.00 10053.16 5362.04 19184.17 00:30:59.762 0 00:30:59.762 07:17:07 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:59.762 07:17:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:00.021 07:17:08 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:00.021 07:17:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:00.021 07:17:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.021 07:17:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.021 07:17:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:00.021 07:17:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.280 07:17:08 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:00.280 07:17:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:00.280 07:17:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:00.280 07:17:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.280 07:17:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.280 07:17:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:00.280 07:17:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.539 07:17:08 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:00.539 07:17:08 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:00.539 07:17:08 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:00.539 07:17:08 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:00.539 07:17:08 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:00.539 07:17:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:00.539 07:17:08 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:00.539 07:17:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
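Every refcount assertion in this part of the trace goes through two small helpers in keyring/common.sh; folded into a single function for readability (an approximation of the traced get_key/get_refcnt pair, not a verbatim copy):

  get_refcnt() {
      # list all keys over the bperf RPC socket, pick the named one, print its refcnt
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
          | jq ".[] | select(.name == \"$1\")" \
          | jq -r .refcnt
  }
  get_refcnt key0   # 2 while nvme0n1 was attached with --psk key0, back to 1 after the detach above

With the controller detached again, the next traced command deliberately attaches with --psk key1; the NOT wrapper around it expects that call to fail, which is what the "Transport endpoint is not connected" and Input/output errors below demonstrate.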
00:31:00.539 07:17:08 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:00.539 07:17:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:00.797 [2024-07-13 07:17:08.703827] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:00.797 [2024-07-13 07:17:08.704201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1483660 (107): Transport endpoint is not connected 00:31:00.797 [2024-07-13 07:17:08.705177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1483660 (9): Bad file descriptor 00:31:00.798 [2024-07-13 07:17:08.706173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:00.798 [2024-07-13 07:17:08.706197] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:00.798 [2024-07-13 07:17:08.706223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:00.798 2024/07/13 07:17:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:00.798 request: 00:31:00.798 { 00:31:00.798 "method": "bdev_nvme_attach_controller", 00:31:00.798 "params": { 00:31:00.798 "name": "nvme0", 00:31:00.798 "trtype": "tcp", 00:31:00.798 "traddr": "127.0.0.1", 00:31:00.798 "adrfam": "ipv4", 00:31:00.798 "trsvcid": "4420", 00:31:00.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:00.798 "prchk_reftag": false, 00:31:00.798 "prchk_guard": false, 00:31:00.798 "hdgst": false, 00:31:00.798 "ddgst": false, 00:31:00.798 "psk": "key1" 00:31:00.798 } 00:31:00.798 } 00:31:00.798 Got JSON-RPC error response 00:31:00.798 GoRPCClient: error on JSON-RPC call 00:31:00.798 07:17:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:00.798 07:17:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:00.798 07:17:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:00.798 07:17:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:00.798 07:17:08 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:00.798 07:17:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:00.798 07:17:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.798 07:17:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.798 07:17:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.798 07:17:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:01.056 07:17:08 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:01.056 
07:17:08 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:01.056 07:17:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:01.056 07:17:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:01.057 07:17:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:01.057 07:17:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.057 07:17:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:01.315 07:17:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:01.315 07:17:09 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:01.315 07:17:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:01.315 07:17:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:01.315 07:17:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:01.574 07:17:09 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:01.574 07:17:09 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:01.574 07:17:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.833 07:17:09 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:01.833 07:17:09 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.8BWxuezCSt 00:31:01.833 07:17:09 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.8BWxuezCSt 00:31:01.833 07:17:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:01.833 07:17:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.8BWxuezCSt 00:31:01.833 07:17:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:01.833 07:17:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:01.833 07:17:09 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:01.833 07:17:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:01.833 07:17:09 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8BWxuezCSt 00:31:01.833 07:17:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8BWxuezCSt 00:31:02.091 [2024-07-13 07:17:10.037630] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8BWxuezCSt': 0100660 00:31:02.091 [2024-07-13 07:17:10.037674] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:02.091 2024/07/13 07:17:10 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.8BWxuezCSt], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:31:02.091 request: 00:31:02.091 { 00:31:02.091 "method": "keyring_file_add_key", 00:31:02.091 "params": { 00:31:02.091 "name": "key0", 00:31:02.091 "path": "/tmp/tmp.8BWxuezCSt" 00:31:02.091 } 00:31:02.091 } 00:31:02.091 Got JSON-RPC error response 00:31:02.091 GoRPCClient: error on JSON-RPC call 00:31:02.091 07:17:10 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:31:02.091 07:17:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.091 07:17:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.091 07:17:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.091 07:17:10 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.8BWxuezCSt 00:31:02.091 07:17:10 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8BWxuezCSt 00:31:02.092 07:17:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8BWxuezCSt 00:31:02.350 07:17:10 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.8BWxuezCSt 00:31:02.350 07:17:10 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:02.350 07:17:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:02.350 07:17:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:02.350 07:17:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:02.350 07:17:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:02.350 07:17:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:02.608 07:17:10 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:02.608 07:17:10 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:02.608 07:17:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:02.608 07:17:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:02.608 07:17:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:02.608 07:17:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.608 07:17:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:02.608 07:17:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.608 07:17:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:02.608 07:17:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:02.867 [2024-07-13 07:17:10.725784] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.8BWxuezCSt': No such file or directory 00:31:02.867 [2024-07-13 07:17:10.725827] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:02.867 [2024-07-13 07:17:10.725851] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:02.867 [2024-07-13 07:17:10.725860] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:02.867 [2024-07-13 07:17:10.725868] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:02.867 2024/07/13 
07:17:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:31:02.867 request: 00:31:02.867 { 00:31:02.867 "method": "bdev_nvme_attach_controller", 00:31:02.867 "params": { 00:31:02.867 "name": "nvme0", 00:31:02.867 "trtype": "tcp", 00:31:02.867 "traddr": "127.0.0.1", 00:31:02.867 "adrfam": "ipv4", 00:31:02.867 "trsvcid": "4420", 00:31:02.867 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.867 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:02.867 "prchk_reftag": false, 00:31:02.867 "prchk_guard": false, 00:31:02.867 "hdgst": false, 00:31:02.867 "ddgst": false, 00:31:02.867 "psk": "key0" 00:31:02.867 } 00:31:02.867 } 00:31:02.867 Got JSON-RPC error response 00:31:02.867 GoRPCClient: error on JSON-RPC call 00:31:02.867 07:17:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:02.867 07:17:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.867 07:17:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.867 07:17:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.867 07:17:10 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:02.867 07:17:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:03.126 07:17:11 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:03.126 07:17:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:03.126 07:17:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:03.126 07:17:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:03.126 07:17:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:03.126 07:17:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:03.126 07:17:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fFXMt2E7lf 00:31:03.126 07:17:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:03.126 07:17:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:03.126 07:17:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:03.126 07:17:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:03.127 07:17:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:03.127 07:17:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:03.127 07:17:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:03.127 07:17:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fFXMt2E7lf 00:31:03.127 07:17:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fFXMt2E7lf 00:31:03.127 07:17:11 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.fFXMt2E7lf 00:31:03.127 07:17:11 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fFXMt2E7lf 00:31:03.127 07:17:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fFXMt2E7lf 00:31:03.385 07:17:11 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:03.385 07:17:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:03.644 nvme0n1 00:31:03.644 07:17:11 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:03.644 07:17:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:03.644 07:17:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.644 07:17:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:03.644 07:17:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.644 07:17:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.902 07:17:11 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:03.902 07:17:11 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:03.902 07:17:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:04.161 07:17:12 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:04.161 07:17:12 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:04.161 07:17:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.161 07:17:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.161 07:17:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:04.419 07:17:12 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:04.419 07:17:12 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:04.419 07:17:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:04.419 07:17:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:04.419 07:17:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.419 07:17:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:04.419 07:17:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.678 07:17:12 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:04.678 07:17:12 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:04.678 07:17:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:04.937 07:17:12 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:04.937 07:17:12 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:04.937 07:17:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.937 07:17:12 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:04.937 07:17:12 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fFXMt2E7lf 00:31:04.937 07:17:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fFXMt2E7lf 
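The block just above is the remove-while-referenced check: key0 was deleted from the keyring while nvme0n1 still used it, so it was only flagged as removed and survived (refcount 1) until the controller was detached, after which keyring_get_keys reported an empty list. In terms of the bperf_cmd wrapper (rpc.py -s /var/tmp/bperf.sock), the sequence condenses to:

  bperf_cmd keyring_file_remove_key key0
  bperf_cmd keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .removed   # prints true
  get_refcnt key0                                  # still 1, the attached controller holds it
  bperf_cmd bdev_nvme_detach_controller nvme0
  bperf_cmd keyring_get_keys | jq length           # 0, the key vanishes once released

Both keys are now being re-added (key0 from /tmp/tmp.fFXMt2E7lf, key1 from /tmp/tmp.HDdbG1fXZa) so that a fresh controller can be attached with --psk key0 and the whole setup captured by save_config.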
00:31:05.194 07:17:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HDdbG1fXZa 00:31:05.194 07:17:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HDdbG1fXZa 00:31:05.452 07:17:13 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:05.452 07:17:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:05.710 nvme0n1 00:31:05.710 07:17:13 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:05.710 07:17:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:05.968 07:17:14 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:05.968 "subsystems": [ 00:31:05.968 { 00:31:05.968 "subsystem": "keyring", 00:31:05.968 "config": [ 00:31:05.968 { 00:31:05.968 "method": "keyring_file_add_key", 00:31:05.968 "params": { 00:31:05.968 "name": "key0", 00:31:05.968 "path": "/tmp/tmp.fFXMt2E7lf" 00:31:05.968 } 00:31:05.968 }, 00:31:05.968 { 00:31:05.968 "method": "keyring_file_add_key", 00:31:05.968 "params": { 00:31:05.968 "name": "key1", 00:31:05.968 "path": "/tmp/tmp.HDdbG1fXZa" 00:31:05.968 } 00:31:05.968 } 00:31:05.968 ] 00:31:05.968 }, 00:31:05.968 { 00:31:05.968 "subsystem": "iobuf", 00:31:05.968 "config": [ 00:31:05.968 { 00:31:05.968 "method": "iobuf_set_options", 00:31:05.968 "params": { 00:31:05.969 "large_bufsize": 135168, 00:31:05.969 "large_pool_count": 1024, 00:31:05.969 "small_bufsize": 8192, 00:31:05.969 "small_pool_count": 8192 00:31:05.969 } 00:31:05.969 } 00:31:05.969 ] 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "subsystem": "sock", 00:31:05.969 "config": [ 00:31:05.969 { 00:31:05.969 "method": "sock_set_default_impl", 00:31:05.969 "params": { 00:31:05.969 "impl_name": "posix" 00:31:05.969 } 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "method": "sock_impl_set_options", 00:31:05.969 "params": { 00:31:05.969 "enable_ktls": false, 00:31:05.969 "enable_placement_id": 0, 00:31:05.969 "enable_quickack": false, 00:31:05.969 "enable_recv_pipe": true, 00:31:05.969 "enable_zerocopy_send_client": false, 00:31:05.969 "enable_zerocopy_send_server": true, 00:31:05.969 "impl_name": "ssl", 00:31:05.969 "recv_buf_size": 4096, 00:31:05.969 "send_buf_size": 4096, 00:31:05.969 "tls_version": 0, 00:31:05.969 "zerocopy_threshold": 0 00:31:05.969 } 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "method": "sock_impl_set_options", 00:31:05.969 "params": { 00:31:05.969 "enable_ktls": false, 00:31:05.969 "enable_placement_id": 0, 00:31:05.969 "enable_quickack": false, 00:31:05.969 "enable_recv_pipe": true, 00:31:05.969 "enable_zerocopy_send_client": false, 00:31:05.969 "enable_zerocopy_send_server": true, 00:31:05.969 "impl_name": "posix", 00:31:05.969 "recv_buf_size": 2097152, 00:31:05.969 "send_buf_size": 2097152, 00:31:05.969 "tls_version": 0, 00:31:05.969 "zerocopy_threshold": 0 00:31:05.969 } 00:31:05.969 } 00:31:05.969 ] 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "subsystem": "vmd", 00:31:05.969 "config": [] 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "subsystem": "accel", 00:31:05.969 "config": [ 00:31:05.969 { 00:31:05.969 "method": 
"accel_set_options", 00:31:05.969 "params": { 00:31:05.969 "buf_count": 2048, 00:31:05.969 "large_cache_size": 16, 00:31:05.969 "sequence_count": 2048, 00:31:05.969 "small_cache_size": 128, 00:31:05.969 "task_count": 2048 00:31:05.969 } 00:31:05.969 } 00:31:05.969 ] 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "subsystem": "bdev", 00:31:05.969 "config": [ 00:31:05.969 { 00:31:05.969 "method": "bdev_set_options", 00:31:05.969 "params": { 00:31:05.969 "bdev_auto_examine": true, 00:31:05.969 "bdev_io_cache_size": 256, 00:31:05.969 "bdev_io_pool_size": 65535, 00:31:05.969 "iobuf_large_cache_size": 16, 00:31:05.969 "iobuf_small_cache_size": 128 00:31:05.969 } 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "method": "bdev_raid_set_options", 00:31:05.969 "params": { 00:31:05.969 "process_window_size_kb": 1024 00:31:05.969 } 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "method": "bdev_iscsi_set_options", 00:31:05.969 "params": { 00:31:05.969 "timeout_sec": 30 00:31:05.969 } 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "method": "bdev_nvme_set_options", 00:31:05.969 "params": { 00:31:05.969 "action_on_timeout": "none", 00:31:05.969 "allow_accel_sequence": false, 00:31:05.969 "arbitration_burst": 0, 00:31:05.969 "bdev_retry_count": 3, 00:31:05.969 "ctrlr_loss_timeout_sec": 0, 00:31:05.969 "delay_cmd_submit": true, 00:31:05.969 "dhchap_dhgroups": [ 00:31:05.969 "null", 00:31:05.969 "ffdhe2048", 00:31:05.969 "ffdhe3072", 00:31:05.969 "ffdhe4096", 00:31:05.969 "ffdhe6144", 00:31:05.969 "ffdhe8192" 00:31:05.969 ], 00:31:05.969 "dhchap_digests": [ 00:31:05.969 "sha256", 00:31:05.969 "sha384", 00:31:05.969 "sha512" 00:31:05.969 ], 00:31:05.969 "disable_auto_failback": false, 00:31:05.969 "fast_io_fail_timeout_sec": 0, 00:31:05.969 "generate_uuids": false, 00:31:05.969 "high_priority_weight": 0, 00:31:05.969 "io_path_stat": false, 00:31:05.969 "io_queue_requests": 512, 00:31:05.969 "keep_alive_timeout_ms": 10000, 00:31:05.969 "low_priority_weight": 0, 00:31:05.969 "medium_priority_weight": 0, 00:31:05.969 "nvme_adminq_poll_period_us": 10000, 00:31:05.969 "nvme_error_stat": false, 00:31:05.969 "nvme_ioq_poll_period_us": 0, 00:31:05.969 "rdma_cm_event_timeout_ms": 0, 00:31:05.969 "rdma_max_cq_size": 0, 00:31:05.969 "rdma_srq_size": 0, 00:31:05.969 "reconnect_delay_sec": 0, 00:31:05.969 "timeout_admin_us": 0, 00:31:05.969 "timeout_us": 0, 00:31:05.969 "transport_ack_timeout": 0, 00:31:05.969 "transport_retry_count": 4, 00:31:05.969 "transport_tos": 0 00:31:05.969 } 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "method": "bdev_nvme_attach_controller", 00:31:05.969 "params": { 00:31:05.969 "adrfam": "IPv4", 00:31:05.969 "ctrlr_loss_timeout_sec": 0, 00:31:05.969 "ddgst": false, 00:31:05.969 "fast_io_fail_timeout_sec": 0, 00:31:05.969 "hdgst": false, 00:31:05.969 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:05.969 "name": "nvme0", 00:31:05.969 "prchk_guard": false, 00:31:05.969 "prchk_reftag": false, 00:31:05.969 "psk": "key0", 00:31:05.969 "reconnect_delay_sec": 0, 00:31:05.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.969 "traddr": "127.0.0.1", 00:31:05.969 "trsvcid": "4420", 00:31:05.969 "trtype": "TCP" 00:31:05.969 } 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "method": "bdev_nvme_set_hotplug", 00:31:05.969 "params": { 00:31:05.969 "enable": false, 00:31:05.969 "period_us": 100000 00:31:05.969 } 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "method": "bdev_wait_for_examine" 00:31:05.969 } 00:31:05.969 ] 00:31:05.969 }, 00:31:05.969 { 00:31:05.969 "subsystem": "nbd", 00:31:05.969 "config": [] 00:31:05.969 } 
00:31:05.969 ] 00:31:05.969 }' 00:31:05.969 07:17:14 keyring_file -- keyring/file.sh@114 -- # killprocess 118945 00:31:05.969 07:17:14 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 118945 ']' 00:31:05.969 07:17:14 keyring_file -- common/autotest_common.sh@952 -- # kill -0 118945 00:31:05.969 07:17:14 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:05.969 07:17:14 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:05.970 07:17:14 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118945 00:31:05.970 killing process with pid 118945 00:31:05.970 Received shutdown signal, test time was about 1.000000 seconds 00:31:05.970 00:31:05.970 Latency(us) 00:31:05.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.970 =================================================================================================================== 00:31:05.970 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.970 07:17:14 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:05.970 07:17:14 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:05.970 07:17:14 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118945' 00:31:05.970 07:17:14 keyring_file -- common/autotest_common.sh@967 -- # kill 118945 00:31:05.970 07:17:14 keyring_file -- common/autotest_common.sh@972 -- # wait 118945 00:31:06.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:06.228 07:17:14 keyring_file -- keyring/file.sh@117 -- # bperfpid=119400 00:31:06.228 07:17:14 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:06.228 07:17:14 keyring_file -- keyring/file.sh@119 -- # waitforlisten 119400 /var/tmp/bperf.sock 00:31:06.228 07:17:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 119400 ']' 00:31:06.228 07:17:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:06.228 07:17:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:06.228 07:17:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
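The second bdevperf instance is brought up from the JSON that bperf_cmd save_config captured a few lines earlier; the configuration that file.sh@115 echoes below is presumably handed over through process substitution, which would explain the -c /dev/fd/63 in the trace. A sketch of that restart (the <(...) form is an assumption, it is not visible verbatim here):

  config=$(bperf_cmd save_config)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 \
      -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config") &
  bperfpid=$!                                      # 119400 in this run
  waitforlisten "$bperfpid" /var/tmp/bperf.sock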
00:31:06.228 07:17:14 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:06.228 "subsystems": [ 00:31:06.228 { 00:31:06.228 "subsystem": "keyring", 00:31:06.228 "config": [ 00:31:06.228 { 00:31:06.228 "method": "keyring_file_add_key", 00:31:06.228 "params": { 00:31:06.228 "name": "key0", 00:31:06.228 "path": "/tmp/tmp.fFXMt2E7lf" 00:31:06.228 } 00:31:06.228 }, 00:31:06.228 { 00:31:06.228 "method": "keyring_file_add_key", 00:31:06.228 "params": { 00:31:06.228 "name": "key1", 00:31:06.228 "path": "/tmp/tmp.HDdbG1fXZa" 00:31:06.228 } 00:31:06.228 } 00:31:06.228 ] 00:31:06.228 }, 00:31:06.228 { 00:31:06.228 "subsystem": "iobuf", 00:31:06.228 "config": [ 00:31:06.228 { 00:31:06.228 "method": "iobuf_set_options", 00:31:06.228 "params": { 00:31:06.228 "large_bufsize": 135168, 00:31:06.228 "large_pool_count": 1024, 00:31:06.228 "small_bufsize": 8192, 00:31:06.228 "small_pool_count": 8192 00:31:06.228 } 00:31:06.228 } 00:31:06.228 ] 00:31:06.228 }, 00:31:06.228 { 00:31:06.228 "subsystem": "sock", 00:31:06.228 "config": [ 00:31:06.228 { 00:31:06.228 "method": "sock_set_default_impl", 00:31:06.228 "params": { 00:31:06.228 "impl_name": "posix" 00:31:06.228 } 00:31:06.228 }, 00:31:06.228 { 00:31:06.228 "method": "sock_impl_set_options", 00:31:06.228 "params": { 00:31:06.228 "enable_ktls": false, 00:31:06.228 "enable_placement_id": 0, 00:31:06.228 "enable_quickack": false, 00:31:06.228 "enable_recv_pipe": true, 00:31:06.228 "enable_zerocopy_send_client": false, 00:31:06.228 "enable_zerocopy_send_server": true, 00:31:06.228 "impl_name": "ssl", 00:31:06.228 "recv_buf_size": 4096, 00:31:06.228 "send_buf_size": 4096, 00:31:06.228 "tls_version": 0, 00:31:06.228 "zerocopy_threshold": 0 00:31:06.228 } 00:31:06.228 }, 00:31:06.228 { 00:31:06.228 "method": "sock_impl_set_options", 00:31:06.228 "params": { 00:31:06.228 "enable_ktls": false, 00:31:06.228 "enable_placement_id": 0, 00:31:06.228 "enable_quickack": false, 00:31:06.228 "enable_recv_pipe": true, 00:31:06.228 "enable_zerocopy_send_client": false, 00:31:06.228 "enable_zerocopy_send_server": true, 00:31:06.228 "impl_name": "posix", 00:31:06.228 "recv_buf_size": 2097152, 00:31:06.228 "send_buf_size": 2097152, 00:31:06.228 "tls_version": 0, 00:31:06.228 "zerocopy_threshold": 0 00:31:06.228 } 00:31:06.228 } 00:31:06.228 ] 00:31:06.228 }, 00:31:06.228 { 00:31:06.228 "subsystem": "vmd", 00:31:06.228 "config": [] 00:31:06.229 }, 00:31:06.229 { 00:31:06.229 "subsystem": "accel", 00:31:06.229 "config": [ 00:31:06.229 { 00:31:06.229 "method": "accel_set_options", 00:31:06.229 "params": { 00:31:06.229 "buf_count": 2048, 00:31:06.229 "large_cache_size": 16, 00:31:06.229 "sequence_count": 2048, 00:31:06.229 "small_cache_size": 128, 00:31:06.229 "task_count": 2048 00:31:06.229 } 00:31:06.229 } 00:31:06.229 ] 00:31:06.229 }, 00:31:06.229 { 00:31:06.229 "subsystem": "bdev", 00:31:06.229 "config": [ 00:31:06.229 { 00:31:06.229 "method": "bdev_set_options", 00:31:06.229 "params": { 00:31:06.229 "bdev_auto_examine": true, 00:31:06.229 "bdev_io_cache_size": 256, 00:31:06.229 "bdev_io_pool_size": 65535, 00:31:06.229 "iobuf_large_cache_size": 16, 00:31:06.229 "iobuf_small_cache_size": 128 00:31:06.229 } 00:31:06.229 }, 00:31:06.229 { 00:31:06.229 "method": "bdev_raid_set_options", 00:31:06.229 "params": { 00:31:06.229 "process_window_size_kb": 1024 00:31:06.229 } 00:31:06.229 }, 00:31:06.229 { 00:31:06.229 "method": "bdev_iscsi_set_options", 00:31:06.229 "params": { 00:31:06.229 "timeout_sec": 30 00:31:06.229 } 00:31:06.229 }, 00:31:06.229 { 00:31:06.229 "method": 
"bdev_nvme_set_options", 00:31:06.229 "params": { 00:31:06.229 "action_on_timeout": "none", 00:31:06.229 "allow_accel_sequence": false, 00:31:06.229 "arbitration_burst": 0, 00:31:06.229 "bdev_retry_count": 3, 00:31:06.229 "ctrlr_loss_timeout_sec": 0, 00:31:06.229 "delay_cmd_submit": true, 00:31:06.229 "dhchap_dhgroups": [ 00:31:06.229 "null", 00:31:06.229 "ffdhe2048", 00:31:06.229 "ffdhe3072", 00:31:06.229 "ffdhe4096", 00:31:06.229 "ffdhe6144", 00:31:06.229 "ffdhe8192" 00:31:06.229 ], 00:31:06.229 "dhchap_digests": [ 00:31:06.229 "sha256", 00:31:06.229 "sha384", 00:31:06.229 "sha512" 00:31:06.229 ], 00:31:06.229 "disable_auto_failback": false, 00:31:06.229 "fast_io_fail_timeout_sec": 0, 00:31:06.229 "generate_uuids": false, 00:31:06.229 "high_priority_weight": 0, 00:31:06.229 "io_path_stat": false, 00:31:06.229 "io_queue_requests": 512, 00:31:06.229 "keep_alive_timeout_ms": 10000, 00:31:06.229 "low_priority_weight": 0, 00:31:06.229 "medium_priority_weight": 0, 00:31:06.229 "nvme_adminq_poll_period_us": 10000, 00:31:06.229 "nvme_error_stat": false, 00:31:06.229 "nvme_ioq_poll_period_us": 0, 00:31:06.229 "rdma_cm_event_timeout_ms": 0, 00:31:06.229 "rdma_max_cq_size": 0, 00:31:06.229 "rdma_srq_size": 0, 00:31:06.229 "reconnect_delay_sec": 0, 00:31:06.229 "timeout_admin_us": 0, 00:31:06.229 "timeout_us": 0, 00:31:06.229 "transport_ack_timeout": 0, 00:31:06.229 "transport_retry_count": 4, 00:31:06.229 "transport_tos": 0 00:31:06.229 } 00:31:06.229 }, 00:31:06.229 { 00:31:06.229 "method": "bdev_nvme_attach_controller", 00:31:06.229 "params": { 00:31:06.229 "adrfam": "IPv4", 00:31:06.229 "ctrlr_loss_timeout_sec": 0, 00:31:06.229 "ddgst": false, 00:31:06.229 "fast_io_fail_timeout_sec": 0, 00:31:06.229 "hdgst": false, 00:31:06.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.229 "name": "nvme0", 00:31:06.229 "prchk_guard": false, 00:31:06.229 "prchk_reftag": false, 00:31:06.229 "psk": "key0", 00:31:06.229 "reconnect_delay_sec": 0, 00:31:06.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.229 "traddr": "127.0.0.1", 00:31:06.229 "trsvcid": "4420", 00:31:06.229 "trtype": "TCP" 00:31:06.229 } 00:31:06.229 }, 00:31:06.229 { 00:31:06.229 "method": "bdev_nvme_set_hotplug", 00:31:06.229 "params": { 00:31:06.229 "enable": false, 00:31:06.229 "period_us": 100000 00:31:06.229 } 00:31:06.229 }, 00:31:06.229 { 00:31:06.229 "method": "bdev_wait_for_examine" 00:31:06.229 } 00:31:06.229 ] 00:31:06.229 }, 00:31:06.229 { 00:31:06.229 "subsystem": "nbd", 00:31:06.229 "config": [] 00:31:06.229 } 00:31:06.229 ] 00:31:06.229 }' 00:31:06.229 07:17:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:06.229 07:17:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:06.229 [2024-07-13 07:17:14.274677] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:31:06.229 [2024-07-13 07:17:14.274756] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119400 ] 00:31:06.487 [2024-07-13 07:17:14.405533] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.487 [2024-07-13 07:17:14.490369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.746 [2024-07-13 07:17:14.667476] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:07.312 07:17:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:07.312 07:17:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:07.312 07:17:15 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:07.312 07:17:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.312 07:17:15 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:07.571 07:17:15 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:07.571 07:17:15 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:07.571 07:17:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:07.571 07:17:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:07.571 07:17:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.571 07:17:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.571 07:17:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:07.832 07:17:15 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:07.832 07:17:15 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:07.832 07:17:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:07.832 07:17:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:07.832 07:17:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:07.832 07:17:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.832 07:17:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:08.089 07:17:15 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:08.089 07:17:15 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:08.089 07:17:15 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:08.089 07:17:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:08.089 07:17:16 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:08.089 07:17:16 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:08.089 07:17:16 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.fFXMt2E7lf /tmp/tmp.HDdbG1fXZa 00:31:08.089 07:17:16 keyring_file -- keyring/file.sh@20 -- # killprocess 119400 00:31:08.089 07:17:16 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 119400 ']' 00:31:08.090 07:17:16 keyring_file -- common/autotest_common.sh@952 -- # kill -0 119400 00:31:08.090 07:17:16 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:08.090 07:17:16 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
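For readers following the keyring_file trace above, the verification it performs can be condensed into the sketch below. The rpc.py path, the /var/tmp/bperf.sock socket, the jq filters and the expected counts are the ones visible in the log; the bperf_cmd and get_refcnt helper shapes are only approximations of what keyring/common.sh does, not verbatim copies of the test script.

# Condensed sketch of the checks traced above (paths and values from this log;
# helper definitions are approximate, for illustration only).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_cmd() { "$rpc" -s /var/tmp/bperf.sock "$@"; }

# Two keys were loaded into the bdevperf instance via its JSON config.
[[ $(bperf_cmd keyring_get_keys | jq length) -eq 2 ]]

# key0 is also referenced by the attached nvme0 controller ("psk": "key0"),
# so its refcount is 2, while the unused key1 stays at 1.
get_refcnt() { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")" | jq -r .refcnt; }
[[ $(get_refcnt key0) -eq 2 ]]
[[ $(get_refcnt key1) -eq 1 ]]

# The controller created from the config must be visible as nvme0.
[[ $(bperf_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

After these checks the test removes the temporary key files and shuts bdevperf down, which is the killprocess 119400 sequence that follows.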
00:31:08.090 07:17:16 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119400 00:31:08.347 killing process with pid 119400 00:31:08.348 Received shutdown signal, test time was about 1.000000 seconds 00:31:08.348 00:31:08.348 Latency(us) 00:31:08.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.348 =================================================================================================================== 00:31:08.348 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119400' 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@967 -- # kill 119400 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@972 -- # wait 119400 00:31:08.348 07:17:16 keyring_file -- keyring/file.sh@21 -- # killprocess 118911 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 118911 ']' 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@952 -- # kill -0 118911 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118911 00:31:08.348 killing process with pid 118911 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118911' 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@967 -- # kill 118911 00:31:08.348 [2024-07-13 07:17:16.402290] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:08.348 07:17:16 keyring_file -- common/autotest_common.sh@972 -- # wait 118911 00:31:08.914 00:31:08.914 real 0m14.821s 00:31:08.914 user 0m35.822s 00:31:08.914 sys 0m3.400s 00:31:08.914 07:17:16 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:08.914 ************************************ 00:31:08.914 END TEST keyring_file 00:31:08.914 07:17:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:08.914 ************************************ 00:31:08.914 07:17:16 -- common/autotest_common.sh@1142 -- # return 0 00:31:08.914 07:17:16 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:31:08.914 07:17:16 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:08.914 07:17:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:08.914 07:17:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:08.914 07:17:16 -- common/autotest_common.sh@10 -- # set +x 00:31:08.914 ************************************ 00:31:08.914 START TEST keyring_linux 00:31:08.914 ************************************ 00:31:08.914 07:17:16 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:09.173 * Looking for test storage... 
00:31:09.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:09.173 07:17:17 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:09.173 07:17:17 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:09.173 07:17:17 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:09.173 07:17:17 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.173 07:17:17 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.173 07:17:17 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43021b44-defc-4eee-995c-65b6e79138bd 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=43021b44-defc-4eee-995c-65b6e79138bd 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:09.174 07:17:17 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.174 07:17:17 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.174 07:17:17 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.174 07:17:17 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.174 07:17:17 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.174 07:17:17 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.174 07:17:17 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:09.174 07:17:17 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:09.174 07:17:17 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:09.174 /tmp/:spdk-test:key0 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:09.174 07:17:17 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:09.174 /tmp/:spdk-test:key1 00:31:09.174 07:17:17 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=119555 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:09.174 07:17:17 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 119555 00:31:09.174 07:17:17 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 119555 ']' 00:31:09.174 07:17:17 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.174 07:17:17 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:09.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.174 07:17:17 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.174 07:17:17 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:09.174 07:17:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:09.174 [2024-07-13 07:17:17.209770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:31:09.174 [2024-07-13 07:17:17.209899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119555 ] 00:31:09.433 [2024-07-13 07:17:17.346635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.433 [2024-07-13 07:17:17.434224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:10.369 07:17:18 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:10.369 [2024-07-13 07:17:18.131442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.369 null0 00:31:10.369 [2024-07-13 07:17:18.163392] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:10.369 [2024-07-13 07:17:18.163674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.369 07:17:18 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:10.369 1022402698 00:31:10.369 07:17:18 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:10.369 514624170 00:31:10.369 07:17:18 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=119591 00:31:10.369 07:17:18 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:10.369 07:17:18 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 119591 /var/tmp/bperf.sock 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 119591 ']' 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:10.369 07:17:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:10.369 [2024-07-13 07:17:18.243064] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
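The keyring_linux trace that follows (07:17:18 through 07:17:22) uses the kernel session keyring instead of key files. Below is a minimal sketch of that flow, assuming keyctl and the rpc.py/socket paths are exactly as shown in this log; the key material is the example value echoed above, and nothing here is the test script itself.

# Sketch of the kernel-keyring flow exercised by the trace below
# (illustrative only; command shapes taken from the log records).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 1. Add an NVMe/TCP PSK to the session keyring; keyctl prints its serial number.
sn=$(keyctl add user :spdk-test:key0 \
  NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s)

# 2. Enable the Linux keyring backend on the bdevperf instance and attach a
#    controller that names the key instead of pointing at a file.
"$rpc" -s /var/tmp/bperf.sock keyring_linux_set_options --enable
"$rpc" -s /var/tmp/bperf.sock framework_start_init
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
  -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
  -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# 3. The serial SPDK reports for the key should match what keyctl handed back,
#    and the key can be unlinked again during cleanup.
[[ $("$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn') == "$sn" ]]
keyctl unlink "$sn"

The failed bdev_nvme_attach_controller call with --psk :spdk-test:key1 further down is the intended negative case; the NOT wrapper in the trace only checks that it returns non-zero.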
00:31:10.369 [2024-07-13 07:17:18.243130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119591 ] 00:31:10.369 [2024-07-13 07:17:18.378919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.628 [2024-07-13 07:17:18.467309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.196 07:17:19 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:11.196 07:17:19 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:11.196 07:17:19 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:11.196 07:17:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:11.455 07:17:19 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:11.455 07:17:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:11.715 07:17:19 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:11.715 07:17:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:11.974 [2024-07-13 07:17:19.900718] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:11.974 nvme0n1 00:31:11.974 07:17:19 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:11.974 07:17:19 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:11.974 07:17:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:11.974 07:17:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:11.974 07:17:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:11.974 07:17:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:12.248 07:17:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:12.248 07:17:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:12.248 07:17:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:12.248 07:17:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:12.248 07:17:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:12.248 07:17:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:12.248 07:17:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:12.564 07:17:20 keyring_linux -- keyring/linux.sh@25 -- # sn=1022402698 00:31:12.564 07:17:20 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:12.564 07:17:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:12.564 07:17:20 keyring_linux -- keyring/linux.sh@26 -- # [[ 1022402698 == \1\0\2\2\4\0\2\6\9\8 ]] 00:31:12.564 07:17:20 
keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1022402698 00:31:12.564 07:17:20 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:12.564 07:17:20 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:12.823 Running I/O for 1 seconds... 00:31:13.760 00:31:13.760 Latency(us) 00:31:13.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.760 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:13.760 nvme0n1 : 1.01 13650.99 53.32 0.00 0.00 9329.09 6940.86 16562.73 00:31:13.760 =================================================================================================================== 00:31:13.760 Total : 13650.99 53.32 0.00 0.00 9329.09 6940.86 16562.73 00:31:13.760 0 00:31:13.760 07:17:21 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:13.760 07:17:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:14.019 07:17:21 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:14.019 07:17:21 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:14.019 07:17:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:14.019 07:17:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:14.019 07:17:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:14.019 07:17:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:14.278 07:17:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:14.278 07:17:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:14.278 07:17:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:14.278 07:17:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:14.278 07:17:22 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:31:14.278 07:17:22 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:14.278 07:17:22 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:14.278 07:17:22 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:14.278 07:17:22 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:14.278 07:17:22 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:14.278 07:17:22 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:14.278 07:17:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:14.537 [2024-07-13 07:17:22.487499] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:14.537 [2024-07-13 07:17:22.487717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb15b0 (107): Transport endpoint is not connected 00:31:14.537 [2024-07-13 07:17:22.488703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb15b0 (9): Bad file descriptor 00:31:14.537 [2024-07-13 07:17:22.489700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.537 [2024-07-13 07:17:22.489717] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:14.537 [2024-07-13 07:17:22.489727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.538 2024/07/13 07:17:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:14.538 request: 00:31:14.538 { 00:31:14.538 "method": "bdev_nvme_attach_controller", 00:31:14.538 "params": { 00:31:14.538 "name": "nvme0", 00:31:14.538 "trtype": "tcp", 00:31:14.538 "traddr": "127.0.0.1", 00:31:14.538 "adrfam": "ipv4", 00:31:14.538 "trsvcid": "4420", 00:31:14.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.538 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.538 "prchk_reftag": false, 00:31:14.538 "prchk_guard": false, 00:31:14.538 "hdgst": false, 00:31:14.538 "ddgst": false, 00:31:14.538 "psk": ":spdk-test:key1" 00:31:14.538 } 00:31:14.538 } 00:31:14.538 Got JSON-RPC error response 00:31:14.538 GoRPCClient: error on JSON-RPC call 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@33 -- # sn=1022402698 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1022402698 00:31:14.538 1 links removed 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:14.538 07:17:22 keyring_linux -- 
keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@33 -- # sn=514624170 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 514624170 00:31:14.538 1 links removed 00:31:14.538 07:17:22 keyring_linux -- keyring/linux.sh@41 -- # killprocess 119591 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 119591 ']' 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 119591 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119591 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:14.538 killing process with pid 119591 00:31:14.538 Received shutdown signal, test time was about 1.000000 seconds 00:31:14.538 00:31:14.538 Latency(us) 00:31:14.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.538 =================================================================================================================== 00:31:14.538 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119591' 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@967 -- # kill 119591 00:31:14.538 07:17:22 keyring_linux -- common/autotest_common.sh@972 -- # wait 119591 00:31:14.797 07:17:22 keyring_linux -- keyring/linux.sh@42 -- # killprocess 119555 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 119555 ']' 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 119555 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119555 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:14.797 killing process with pid 119555 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119555' 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@967 -- # kill 119555 00:31:14.797 07:17:22 keyring_linux -- common/autotest_common.sh@972 -- # wait 119555 00:31:15.362 00:31:15.362 real 0m6.317s 00:31:15.362 user 0m11.797s 00:31:15.362 sys 0m1.853s 00:31:15.362 07:17:23 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:15.362 07:17:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:15.362 ************************************ 00:31:15.362 END TEST keyring_linux 00:31:15.362 ************************************ 00:31:15.362 07:17:23 -- common/autotest_common.sh@1142 -- # return 0 00:31:15.362 07:17:23 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- 
spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:15.362 07:17:23 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:15.362 07:17:23 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:15.362 07:17:23 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:15.362 07:17:23 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:15.362 07:17:23 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:15.362 07:17:23 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:15.362 07:17:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:15.362 07:17:23 -- common/autotest_common.sh@10 -- # set +x 00:31:15.362 07:17:23 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:15.362 07:17:23 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:15.362 07:17:23 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:15.362 07:17:23 -- common/autotest_common.sh@10 -- # set +x 00:31:16.738 INFO: APP EXITING 00:31:16.738 INFO: killing all VMs 00:31:16.738 INFO: killing vhost app 00:31:16.738 INFO: EXIT DONE 00:31:17.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:17.674 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:17.674 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:18.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:18.243 Cleaning 00:31:18.243 Removing: /var/run/dpdk/spdk0/config 00:31:18.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:18.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:18.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:18.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:18.243 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:18.243 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:18.243 Removing: /var/run/dpdk/spdk1/config 00:31:18.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:18.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:18.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:18.243 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:18.243 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:18.243 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:18.243 Removing: /var/run/dpdk/spdk2/config 00:31:18.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:18.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:18.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:18.243 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:18.243 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:18.243 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:18.243 Removing: /var/run/dpdk/spdk3/config 00:31:18.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:18.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:18.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:18.243 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:18.243 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:18.243 Removing: 
/var/run/dpdk/spdk3/hugepage_info 00:31:18.502 Removing: /var/run/dpdk/spdk4/config 00:31:18.502 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:18.502 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:18.502 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:18.502 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:18.502 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:18.502 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:18.502 Removing: /dev/shm/nvmf_trace.0 00:31:18.502 Removing: /dev/shm/spdk_tgt_trace.pid73020 00:31:18.502 Removing: /var/run/dpdk/spdk0 00:31:18.502 Removing: /var/run/dpdk/spdk1 00:31:18.502 Removing: /var/run/dpdk/spdk2 00:31:18.502 Removing: /var/run/dpdk/spdk3 00:31:18.502 Removing: /var/run/dpdk/spdk4 00:31:18.502 Removing: /var/run/dpdk/spdk_pid100119 00:31:18.502 Removing: /var/run/dpdk/spdk_pid100267 00:31:18.502 Removing: /var/run/dpdk/spdk_pid100531 00:31:18.502 Removing: /var/run/dpdk/spdk_pid100648 00:31:18.502 Removing: /var/run/dpdk/spdk_pid100897 00:31:18.502 Removing: /var/run/dpdk/spdk_pid101010 00:31:18.502 Removing: /var/run/dpdk/spdk_pid101145 00:31:18.502 Removing: /var/run/dpdk/spdk_pid101481 00:31:18.502 Removing: /var/run/dpdk/spdk_pid101860 00:31:18.502 Removing: /var/run/dpdk/spdk_pid101862 00:31:18.502 Removing: /var/run/dpdk/spdk_pid104077 00:31:18.502 Removing: /var/run/dpdk/spdk_pid104384 00:31:18.502 Removing: /var/run/dpdk/spdk_pid104876 00:31:18.502 Removing: /var/run/dpdk/spdk_pid104879 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105224 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105238 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105259 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105284 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105290 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105435 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105437 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105544 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105547 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105650 00:31:18.502 Removing: /var/run/dpdk/spdk_pid105652 00:31:18.502 Removing: /var/run/dpdk/spdk_pid106128 00:31:18.502 Removing: /var/run/dpdk/spdk_pid106171 00:31:18.502 Removing: /var/run/dpdk/spdk_pid106328 00:31:18.502 Removing: /var/run/dpdk/spdk_pid106443 00:31:18.502 Removing: /var/run/dpdk/spdk_pid106830 00:31:18.502 Removing: /var/run/dpdk/spdk_pid107080 00:31:18.502 Removing: /var/run/dpdk/spdk_pid107569 00:31:18.502 Removing: /var/run/dpdk/spdk_pid108149 00:31:18.502 Removing: /var/run/dpdk/spdk_pid109483 00:31:18.502 Removing: /var/run/dpdk/spdk_pid110076 00:31:18.502 Removing: /var/run/dpdk/spdk_pid110078 00:31:18.502 Removing: /var/run/dpdk/spdk_pid111997 00:31:18.502 Removing: /var/run/dpdk/spdk_pid112082 00:31:18.502 Removing: /var/run/dpdk/spdk_pid112173 00:31:18.502 Removing: /var/run/dpdk/spdk_pid112259 00:31:18.502 Removing: /var/run/dpdk/spdk_pid112417 00:31:18.502 Removing: /var/run/dpdk/spdk_pid112507 00:31:18.502 Removing: /var/run/dpdk/spdk_pid112592 00:31:18.502 Removing: /var/run/dpdk/spdk_pid112684 00:31:18.502 Removing: /var/run/dpdk/spdk_pid113031 00:31:18.502 Removing: /var/run/dpdk/spdk_pid113715 00:31:18.502 Removing: /var/run/dpdk/spdk_pid115056 00:31:18.502 Removing: /var/run/dpdk/spdk_pid115252 00:31:18.502 Removing: /var/run/dpdk/spdk_pid115538 00:31:18.503 Removing: /var/run/dpdk/spdk_pid115831 00:31:18.503 Removing: /var/run/dpdk/spdk_pid116372 00:31:18.503 Removing: /var/run/dpdk/spdk_pid116377 00:31:18.503 Removing: /var/run/dpdk/spdk_pid116721 
00:31:18.503 Removing: /var/run/dpdk/spdk_pid116876 00:31:18.503 Removing: /var/run/dpdk/spdk_pid117028 00:31:18.503 Removing: /var/run/dpdk/spdk_pid117125 00:31:18.503 Removing: /var/run/dpdk/spdk_pid117331 00:31:18.503 Removing: /var/run/dpdk/spdk_pid117439 00:31:18.503 Removing: /var/run/dpdk/spdk_pid118101 00:31:18.503 Removing: /var/run/dpdk/spdk_pid118142 00:31:18.503 Removing: /var/run/dpdk/spdk_pid118172 00:31:18.503 Removing: /var/run/dpdk/spdk_pid118427 00:31:18.503 Removing: /var/run/dpdk/spdk_pid118458 00:31:18.503 Removing: /var/run/dpdk/spdk_pid118488 00:31:18.503 Removing: /var/run/dpdk/spdk_pid118911 00:31:18.503 Removing: /var/run/dpdk/spdk_pid118945 00:31:18.503 Removing: /var/run/dpdk/spdk_pid119400 00:31:18.503 Removing: /var/run/dpdk/spdk_pid119555 00:31:18.503 Removing: /var/run/dpdk/spdk_pid119591 00:31:18.503 Removing: /var/run/dpdk/spdk_pid72875 00:31:18.762 Removing: /var/run/dpdk/spdk_pid73020 00:31:18.762 Removing: /var/run/dpdk/spdk_pid73281 00:31:18.762 Removing: /var/run/dpdk/spdk_pid73379 00:31:18.762 Removing: /var/run/dpdk/spdk_pid73418 00:31:18.762 Removing: /var/run/dpdk/spdk_pid73528 00:31:18.762 Removing: /var/run/dpdk/spdk_pid73558 00:31:18.762 Removing: /var/run/dpdk/spdk_pid73676 00:31:18.762 Removing: /var/run/dpdk/spdk_pid73956 00:31:18.762 Removing: /var/run/dpdk/spdk_pid74132 00:31:18.762 Removing: /var/run/dpdk/spdk_pid74215 00:31:18.762 Removing: /var/run/dpdk/spdk_pid74307 00:31:18.762 Removing: /var/run/dpdk/spdk_pid74397 00:31:18.762 Removing: /var/run/dpdk/spdk_pid74435 00:31:18.762 Removing: /var/run/dpdk/spdk_pid74465 00:31:18.762 Removing: /var/run/dpdk/spdk_pid74527 00:31:18.762 Removing: /var/run/dpdk/spdk_pid74644 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75257 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75321 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75392 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75426 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75505 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75535 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75614 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75642 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75699 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75729 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75775 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75805 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75957 00:31:18.762 Removing: /var/run/dpdk/spdk_pid75987 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76066 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76137 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76161 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76220 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76254 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76289 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76324 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76362 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76398 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76431 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76467 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76500 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76536 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76565 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76605 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76634 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76668 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76703 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76732 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76772 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76804 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76847 00:31:18.762 
Removing: /var/run/dpdk/spdk_pid76876 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76917 00:31:18.762 Removing: /var/run/dpdk/spdk_pid76981 00:31:18.762 Removing: /var/run/dpdk/spdk_pid77092 00:31:18.762 Removing: /var/run/dpdk/spdk_pid77516 00:31:18.762 Removing: /var/run/dpdk/spdk_pid84266 00:31:18.762 Removing: /var/run/dpdk/spdk_pid84609 00:31:18.762 Removing: /var/run/dpdk/spdk_pid87024 00:31:18.762 Removing: /var/run/dpdk/spdk_pid87396 00:31:18.762 Removing: /var/run/dpdk/spdk_pid87658 00:31:18.762 Removing: /var/run/dpdk/spdk_pid87704 00:31:18.762 Removing: /var/run/dpdk/spdk_pid88322 00:31:18.762 Removing: /var/run/dpdk/spdk_pid88757 00:31:18.762 Removing: /var/run/dpdk/spdk_pid88807 00:31:18.762 Removing: /var/run/dpdk/spdk_pid89169 00:31:18.762 Removing: /var/run/dpdk/spdk_pid89692 00:31:18.762 Removing: /var/run/dpdk/spdk_pid90115 00:31:18.762 Removing: /var/run/dpdk/spdk_pid91081 00:31:18.762 Removing: /var/run/dpdk/spdk_pid92060 00:31:18.762 Removing: /var/run/dpdk/spdk_pid92182 00:31:18.762 Removing: /var/run/dpdk/spdk_pid92244 00:31:18.762 Removing: /var/run/dpdk/spdk_pid93717 00:31:18.762 Removing: /var/run/dpdk/spdk_pid93937 00:31:18.762 Removing: /var/run/dpdk/spdk_pid99138 00:31:18.762 Removing: /var/run/dpdk/spdk_pid99573 00:31:18.762 Removing: /var/run/dpdk/spdk_pid99676 00:31:18.762 Removing: /var/run/dpdk/spdk_pid99823 00:31:18.762 Removing: /var/run/dpdk/spdk_pid99873 00:31:18.762 Removing: /var/run/dpdk/spdk_pid99921 00:31:18.762 Removing: /var/run/dpdk/spdk_pid99961 00:31:18.762 Clean 00:31:19.021 07:17:26 -- common/autotest_common.sh@1451 -- # return 0 00:31:19.021 07:17:26 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:19.021 07:17:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:19.021 07:17:26 -- common/autotest_common.sh@10 -- # set +x 00:31:19.021 07:17:26 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:19.021 07:17:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:19.021 07:17:26 -- common/autotest_common.sh@10 -- # set +x 00:31:19.021 07:17:26 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:19.021 07:17:26 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:19.021 07:17:26 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:19.021 07:17:26 -- spdk/autotest.sh@391 -- # hash lcov 00:31:19.021 07:17:26 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:19.021 07:17:26 -- spdk/autotest.sh@393 -- # hostname 00:31:19.021 07:17:26 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:19.280 geninfo: WARNING: invalid characters removed from testname! 
00:31:41.199 07:17:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:43.734 07:17:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:45.635 07:17:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:48.163 07:17:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:50.691 07:17:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:52.592 07:18:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:55.227 07:18:03 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:55.227 07:18:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:55.227 07:18:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:55.227 07:18:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.227 07:18:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.227 07:18:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.227 07:18:03 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.227 07:18:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.227 07:18:03 -- paths/export.sh@5 -- $ export PATH 00:31:55.227 07:18:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.227 07:18:03 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:55.227 07:18:03 -- common/autobuild_common.sh@444 -- $ date +%s 00:31:55.227 07:18:03 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720855083.XXXXXX 00:31:55.227 07:18:03 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720855083.zh4GkT 00:31:55.227 07:18:03 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:31:55.227 07:18:03 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:31:55.227 07:18:03 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:31:55.227 07:18:03 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:31:55.227 07:18:03 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:55.228 07:18:03 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:55.228 07:18:03 -- common/autobuild_common.sh@460 -- $ get_config_params 00:31:55.228 07:18:03 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:31:55.228 07:18:03 -- common/autotest_common.sh@10 -- $ set +x 00:31:55.228 07:18:03 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:31:55.228 07:18:03 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:31:55.228 07:18:03 -- pm/common@17 -- $ local monitor 00:31:55.228 07:18:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:55.228 07:18:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:55.228 07:18:03 -- pm/common@25 -- $ sleep 1 00:31:55.228 07:18:03 -- pm/common@21 -- $ date +%s 00:31:55.228 07:18:03 -- pm/common@21 -- $ date +%s 00:31:55.228 07:18:03 -- pm/common@21 -- $ 
00:31:55.228 07:18:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720855083
00:31:55.228 07:18:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720855083
00:31:55.228 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720855083_collect-vmstat.pm.log
00:31:55.228 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720855083_collect-cpu-load.pm.log
00:31:56.162 07:18:04 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:31:56.162 07:18:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:31:56.162 07:18:04 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:31:56.162 07:18:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:56.162 07:18:04 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:31:56.162 07:18:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:56.162 07:18:04 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:56.162 07:18:04 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:56.162 07:18:04 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:56.162 07:18:04 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:56.421 07:18:04 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:56.421 07:18:04 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:56.421 07:18:04 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:56.421 07:18:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:56.421 07:18:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:56.421 07:18:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:31:56.421 07:18:04 -- pm/common@44 -- $ pid=121330
00:31:56.421 07:18:04 -- pm/common@50 -- $ kill -TERM 121330
00:31:56.421 07:18:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:56.421 07:18:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:31:56.421 07:18:04 -- pm/common@44 -- $ pid=121332
00:31:56.421 07:18:04 -- pm/common@50 -- $ kill -TERM 121332
00:31:56.421 + [[ -n 5894 ]]
00:31:56.421 + sudo kill 5894
00:31:56.431 [Pipeline] }
00:31:56.450 [Pipeline] // timeout
00:31:56.456 [Pipeline] }
00:31:56.474 [Pipeline] // stage
00:31:56.480 [Pipeline] }
00:31:56.497 [Pipeline] // catchError
00:31:56.507 [Pipeline] stage
00:31:56.510 [Pipeline] { (Stop VM)
00:31:56.524 [Pipeline] sh
00:31:56.803 + vagrant halt
00:32:00.087 ==> default: Halting domain...
00:32:06.693 [Pipeline] sh
00:32:06.970 + vagrant destroy -f
00:32:10.255 ==> default: Removing domain...
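The stop_monitor_resources trace shows the matching teardown: an EXIT trap checks for each collector's PID file and sends SIGTERM to the recorded process. A compact sketch of that shape, using an illustrative function name and local paths rather than the real pm/common code:

    #!/usr/bin/env bash
    # Sketch only: function name and directory layout are illustrative.
    stop_monitors() {
        local pidfile pid
        for pidfile in ./output/power/collect-cpu-load.pid ./output/power/collect-vmstat.pid; do
            [[ -e "$pidfile" ]] || continue
            pid=$(<"$pidfile")
            # Ask the collector to exit; ignore it if it is already gone.
            kill -TERM "$pid" 2>/dev/null || true
        done
    }
    trap stop_monitors EXIT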
00:32:10.267 [Pipeline] sh
00:32:10.544 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:32:10.552 [Pipeline] }
00:32:10.570 [Pipeline] // stage
00:32:10.575 [Pipeline] }
00:32:10.597 [Pipeline] // dir
00:32:10.602 [Pipeline] }
00:32:10.620 [Pipeline] // wrap
00:32:10.625 [Pipeline] }
00:32:10.640 [Pipeline] // catchError
00:32:10.649 [Pipeline] stage
00:32:10.651 [Pipeline] { (Epilogue)
00:32:10.665 [Pipeline] sh
00:32:10.944 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:16.221 [Pipeline] catchError
00:32:16.223 [Pipeline] {
00:32:16.238 [Pipeline] sh
00:32:16.518 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:16.776 Artifacts sizes are good
00:32:16.788 [Pipeline] }
00:32:16.810 [Pipeline] // catchError
00:32:16.825 [Pipeline] archiveArtifacts
00:32:16.833 Archiving artifacts
00:32:17.016 [Pipeline] cleanWs
00:32:17.029 [WS-CLEANUP] Deleting project workspace...
00:32:17.029 [WS-CLEANUP] Deferred wipeout is used...
00:32:17.037 [WS-CLEANUP] done
00:32:17.039 [Pipeline] }
00:32:17.062 [Pipeline] // stage
00:32:17.068 [Pipeline] }
00:32:17.089 [Pipeline] // node
00:32:17.095 [Pipeline] End of Pipeline
00:32:17.133 Finished: SUCCESS
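For reference, the epilogue's size gate (check_artifacts_size.sh, which printed "Artifacts sizes are good" above) can be imagined along these lines. The real script lives in the jbp repository and is not reproduced here; the directory name and the 2048 MB budget below are purely illustrative assumptions:

    #!/usr/bin/env bash
    # Sketch only: the limit and the "output" directory are assumptions, not CI values.
    limit_mb=2048
    used_mb=$(du -sm output | cut -f1)
    if (( used_mb > limit_mb )); then
        echo "Artifacts too large: ${used_mb} MB (limit ${limit_mb} MB)" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"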